LindaAthena is right.
There should be old functions support for a long period of time.
I think you missed my point. How long to keep old functions around is a balancing act. Furthermore, you can't compare how long MS supports a product that you and millions of other people pay money to for years of support with how long an open-source project is obligated to provide long-term support (at the expense of new features, or of compatibility with new OSes and new OS features).
One of the "features" (good and bad) of open source is that not as many dollars need to be spent on support, since no one is paying for the product. Watching people work on the linux distro I use, I see them going for a 'trusted OS' that vendors could use to let you rent movies or games while being safe from piracy -- without having to pay for compatibility with previous software. Sort of like how 'Steam' requires you to restart their program (closing any game or forum you are in) to switch to their 'secure mode'. At the same time, Intel is developing ways of automatically controlling what programs run under a secure boot, so you could only run the software your computer vendor 'rents' to you.
You'll technically 'own' your PC, but MS has plans to require Windows- and Microsoft-certified HW to only allow 'secure boot', perhaps as early as sometime in the lifetime of Windows 10. In Windows 8 they only require(d) locked-down secure boot from manufacturers using non-Intel chips (e.g. ARM). Essentially those things are 'consoles' that you have paid PC prices for. Yuck! Even though many have complained about the direction open software is taking, those in control of the software say it is a "do-ocracy" -- those who "do" get to make the decisions.
However, theoretically, it is _open source_, so you can always step up and do it your way. But not everyone has the time or ability to redo every program that they use or want to change. My distro shipped over 70,000 software packages in their latest release (13.2). If I 'only' needed to change 0.1% of those programs to meet my "needs" (wants), that's still 70 programs I'd need to learn and change -- ouch!
But the option IS still available, and I do patch and rebuild many programs I use on linux. You too can patch programs (or learn how), or pay someone else to do it if you really don't have the time to do it yourself.
I still don't know why wj32 want no plugin developers & new plugins.
wj32 has never said they don't want plugin developers -- if that were the case, why would they have provided a plugin interface in the first place?
If you compare ph with other open source which support 3rd party plugins, all have many plugins & plugin developers except ph.
...like linux? Yes, they have lots of developers -- but Linus's policy on the internal API of linux is that it is subject to change at ANY time -- and he has changed it and forced everyone to jump.
You also have to compare apples to apples -- what other system-introspection software is out there, in ANY product, that has as many features as PH? The only two other projects that might be considered for OS introspection are Sysinternals' Process Explorer and Microsoft's Task Manager -- NEITHER of them, AFAIK, supports plugins, because as the OS evolves, they might have to evolve far too much to keep compatibility with previous plugins.
However, it seems like you are offering a solution:
I think you can add those missing functions in 2.36
& Resolve this issue immediately
Or provide a dll which loads before 3rd party plugins load to use older plugins.
It sounds like you are volunteering to add the missing functions and/or provide a DLL to solve the problem, which wj could test and include -- problem solved.
I certainly am impressed that you are willing to step up and put your efforts on the line to keep the functions you _need_.
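For what it's worth, the shape of such a compatibility layer is usually just thin forwarding: the old entry points stay exported, and each one calls the replacement with the old contract filled in. Here's a minimal sketch of that idea (the function names and signatures are invented for illustration -- they are NOT the actual Process Hacker plugin API, and a real version would live in a DLL loaded before the third-party plugins):

```python
# Sketch of a "compat shim": keep old function names alive by
# forwarding them to the new API. All names here are hypothetical.

def get_process_name_v2(pid, max_len):
    """The 'new' API: same job, but takes an explicit length limit."""
    # Stand-in lookup; a real version would query the OS for the pid.
    name = "demo.exe"
    return name[:max_len]

def get_process_name(pid):
    """The 'old' API that 2.35-era plugins still call.

    Keeping it alive is a one-line forward that supplies the
    fixed buffer size the old contract assumed.
    """
    return get_process_name_v2(pid, 260)

# An "old plugin" calling the legacy entry point still works:
print(get_process_name(1234))  # demo.exe
```

The point is that the shim itself carries almost no logic, so it is cheap to maintain -- the cost is deciding which old contracts are worth freezing.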
Hey, while you are in there working on the code -- ALL of the measurements displayed are in *absolute* numbers: memory allocations/s, MB of data (I/O) per second, etc. All of them *except* cpu, which is displayed as cpu time used on the system, divided by the # of cpus, and then divided by either the # of cpu cycles (which varies) or by absolute time. But since cpu seconds have already been divided by the # of "similar-x86-64" processors in the machine, cpu time is already divided by an arbitrary number that prevents obvious machine-to-machine performance comparisons and hides the nature of programs -- specifically, whether they are single-threaded or efficient.
For example, if a file transfer program only uses 5% of the cpu when running 25 different filestreams -- that sounds rather efficient.
Because of that efficiency, you think you want to try it on an old machine you want to dedicate to serving files. But when you try it, the machine locks up. Your old single-core machine can't begin to match the speed of even 1 of the test machine's 20 cores (2 sockets, 10 cores each) -- 5% of 20 cores is one full core pinned at 100%.
To make matters worse, the disks on the test machine only showed as 3-5% busy -- lots of spare I/O... but again, displaying I/O as a percent of time spent waiting for I/O hides the fact that a 24-disk RAID10 (w/12 stripes) can easily do 1.2GB/s for large reads and writes... compared to that old machine you thought you'd use as a server, which only has 1 disk (no RAID).
Anytime you divide your final figure by the # of HW service units you end up with a figure of dubious worth, no?
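To make the cpu case concrete, here's a small sketch (plain Python, all numbers invented) of what that normalization does: a program that pins exactly one core looks nearly idle on a 20-core test box, yet the *same* workload saturates a single-core machine.

```python
# Why "cpu seconds divided by core count" hides single-threaded programs.

def cpu_percent(core_seconds_used, wall_seconds, num_cores):
    """CPU usage the way a core-count-normalizing monitor reports it."""
    return 100.0 * core_seconds_used / (wall_seconds * num_cores)

# One thread fully busy for 10 seconds of wall time = 10 core-seconds.
busy_core_seconds = 10.0
wall = 10.0

on_test_box = cpu_percent(busy_core_seconds, wall, num_cores=20)
on_old_box = cpu_percent(busy_core_seconds, wall, num_cores=1)

print(on_test_box)  # 5.0   -> "only 5% cpu", looks efficient
print(on_old_box)   # 100.0 -> the same load pegs the single-core server
```

Reporting absolute core-seconds (10 s of cpu per 10 s of wall time) would make the single-threaded bottleneck obvious on any machine.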
Well, on 2nd thought, never mind -- maybe I'll get to doing that myself some day (unlikely, given the other projects on my plate, but pigs might fly!).
Cheers, and waiting to see what your new DLL is going to do... Sorry this post is so long -- I didn't really have the time to make it shorter... ;-(