Memory and CPU optimizers: Are they gimmicks?

Are tools which claim to let you use all your CPU cores, and tools which help you regain memory (I can't name any just yet, but I've seen plenty), gimmicks? Do these tools really work?

GurdeepS

Posted 2010-04-13T20:30:56.937

Reputation: 723

The "regain memory" ones along the lines of HijackThis! are reasonable: they help you disable Windows and third-party services and background/start-up processes that you may not need. Beyond that, meh. – mctylr – 2010-04-13T22:12:21.833

Answers

Stay away from the "memory optimizers". They don't help you. Leave memory management to the OS!

This lists the reasons why: http://www.t3chnophilia.com/2008/08/5-reasons-you-shouldnt-use-memory.html

sinni800

Posted 2010-04-13T20:30:56.937

Reputation: 3 048

Memory Optimizers

There was a time when DOS 386/486 users had to manually optimize their config.sys and autoexec.bat files carefully to load as much stuff "high" (above 640K) as possible. Loading the wrong thing first could bump everything else down into your precious 640K application space.

Then came memory optimizers that figured this stuff out for you automatically. They reached true usefulness and maturity right about the time that Windows 95 made them all obsolete.

System Optimizers

Other things that speed optimizers do include emptying your trash can, defragging your disk, cleaning your registry, removing old install logs, uninstalling unneeded drivers and uninstalling fragments of old programs.

If they happen to clean up something right at a threshold, the speed improvement might be dramatic. Usually, it isn't.

As you can guess, you can do all these things yourself without spending money.

kmarsh

Posted 2010-04-13T20:30:56.937

Reputation: 4 632

Don't forget Microsoft's MemMaker tool, which came with later versions of MS-DOS (6.0+, I think). http://www.easydos.com/memmaker.html

– mctylr – 2010-04-13T22:06:37.163

Judging by your username, I'd say you're a software developer like myself.

If an application isn't written to take advantage of multi-threading, a helper application isn't going to do anything for you.

If you're running a multi-threaded application, and it's written properly, the thread scheduling algorithms in Windows are going to be just fine. Programs like this are there to take advantage of people who don't know what's going on under the hood.

Almost all modern computers have at least two processors, which are generally referred to as “cores”.

That should be a dead giveaway, since we all know that a core and a processor are two different things.

Justin Niessner

Posted 2010-04-13T20:30:56.937

Reputation: 259

Yep, I'm a developer. I could see no way for a 3rd-party app to change the behaviour of another app - to make use of more cores, the application code itself would need to change. Seems like the company is trying to target non-techies. – GurdeepS – 2010-04-13T20:46:48.027

It most definitely is. It's amazing what you can do if you're willing to flat out lie to the people you're selling your software to. – Justin Niessner – 2010-04-13T20:48:05.247

There may be a grain of truth in the claims of that particular program, but by my understanding it won't deliver a fraction of its claimed effects except perhaps under unrealistic benchmarks run in very artificial testing conditions (the claims may be completely true, but only under conditions you'll likely never experience).

It sounds like the utility is tweaking processor/core affinity. By default, processes may bounce between cores quite a bit on most operating systems - for instance, you will sometimes see a program apparently using ~25% of four cores or ~50% of two rather than ~100% of one.

With most, if not all, multi-core and/or multi-processor designs, a single thread within a process (which could be the only thread in a single-threaded process) that is CPU-bound and works on blocks of data smaller than the CPU's L1/L2 cache for most of its operation will run more efficiently if tied to one particular core rather than being allowed to bounce between them. This is because moving between cores usually means the new core's L1 cache needs to be "primed" (from L2/L3 cache or main RAM) before a tight loop that another core's cache may already be primed for can run at peak efficiency. It is worse still if the L2 or L3 cache has no useful code/data in it either, which is likely if the thread has jumped to a different physical CPU.

The difference is going to be minimal at best, though. To make a difference, the process/thread needs to loop through a limited set of data many, many times, and even if it switches cores once or twice a second, re-priming the L1 cache that often is unlikely to add up to anything significant unless you are timing an operation that takes days in total.

"Why don't OSes set process/thread affinity more aggressively if it could make a difference?" is the question that should jump to mind at this point. Basically, in the general case it makes so little difference, and in cases where several CPU-bound processes or threads could between them use all your CPU power, getting affinity wrong can slow things down by more than getting it right would help - and you have all the extra computation involved in "guessing" what is right for the current circumstances, which could kill or reverse any general benefit. To top it all off, hard and fast rules are few and far between, as what may be best for juggling certain processes on one architecture might be much less optimal on another. So your OS is optimised for the general case: it does a little such optimisation but doesn't try too hard. If you are doing something very specific (say, a number-crunching or data-processing job that is almost the only thing running on the machine), your OS will allow you, as a sufficiently privileged user, to tweak the affinity of processes, and will allow the programmer to give the OS hints about what might be more efficient - but tweaking wrongly may get in the way of the scheduler rather than help it.

Long story short: the distributors of that software might be using a technique that in theory helps the way they say it does, but by my understanding you are not going to notice the difference if it does, and it is equally likely to make things slower. You'll probably waste many times more minutes installing this and playing with its settings than it would ever save your CPU in decades of constant 24/7 computationally intensive operation!

Even shorter: Almost, but not quite, entirely snake oil.

Caveat: In my opinion, assuming my knowledge pertaining to the area is correct!

David Spillett

Posted 2010-04-13T20:30:56.937

Reputation: 22 424

I'm a developer, so I'll answer this question from a C/C++ perspective. Assuming I'm writing a GUI program and never break out worker threads using CreateThread(), then it is single-core and nothing you can do will help. If I am using CreateThread() and CreateProcess(), then I, as the programmer, can attempt to set affinity. I can also set scheduling priorities, although these are approximations based on descriptions like Lowest, BelowNormal, Normal, AboveNormal and Highest. The actual underlying thread priority levels run from 0 to 31 (16-31 reserved for real-time code), but you can't map to these directly; you only get put in a bin as per the scheduler's calculations. I have no more control than that over my threads. The internals of the Windows scheduler are not known outside of Microsoft, partly because Microsoft changes them between Windows releases to keep atop of system usage and make things go as smoothly as possible.

The idea that another application is going to help is ridiculous, unless it is going to break my process across cores. Doing so would be very difficult unless done at kernel level, and may well break the application anyway. Attempting to force the scheduler to do something different is also a waste of CPU cycles, since you have to spend computation deciding how to speed things up before applying it to your threads and processes.

Finally, you could make your system less stable using tools like these. Antivirus scanners run at low priority for a reason, for example. Read up on the niceness value of a process under Linux (it's a similar concept in Windows): basically, processes that hog resources are considered badly behaved. Under Linux, they actually get less CPU time for such bad behaviour; under Windows, they just hog resources and slow everything down. So whilst a well-written program should be able to cope, these tools might give processes more priority than they really want, or less when they need it.

As others have said, this is best left to the OS.

user26996

Posted 2010-04-13T20:30:56.937

Reputation: