I've been reading the various comments and observing the gasps and guffaws surrounding Herb Sutter's DDJ and C/C++ Users Journal articles regarding the end of the line for Moore's Law. Granted, this is the first time in nearly 30 years that processor speeds haven't grown at the “normal” meteoric rate. However, I don't think this should be cause for immediate concern; in fact, it should be seen as a good thing. As the old proverb goes, “necessity is the mother of invention,” and this may be a good time for the PC vendors to focus on other aspects of the whole system. Rather than looking to the core processor for all their speed gains, they should turn their attention to the memory bus. This is the number one bottleneck in a system. What if all the memory (the RAM, that is) ran on the same clock as the core CPU? What if a new value could be fetched from memory in a single clock cycle? Sure, there are small amounts of memory that do run at the internal CPU clock, namely the level 1 cache, but those are just cheats and tricks. In practice, the cache does a fairly good job of keeping the CPU monster fed with new instructions and data to push around, but when the level 1 cache proved too small, they added a cache for the cache: the level 2 cache. This cache is usually 2-4 times the size of the level 1 cache and runs roughly that much slower. Some systems even add a level 3 cache. What's next? A level 4 cache... oh wait... that's the main system memory ;-)..
There are also some newer architectures coming into use. For instance, Non-Uniform Memory Access, or NUMA, is an interesting technique for reducing cross-CPU contention in multi-CPU systems. By giving each CPU its own chunk of system memory that only it can access directly, each CPU can run without much worry that another CPU is touching that memory at the same time, since the CPUs have to negotiate with each other to get at memory they don't control.
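The same “own your memory” idea pays off in software even without NUMA hardware. A minimal sketch (my illustration, not anything NUMA-specific): give each thread a private, contiguous slice of the data and a private result slot, so no locks are needed and threads never fight over the same memory.

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Sum `data` using `nthreads` workers. Each worker reads only its own
// contiguous slice and writes only its own slot in `partial`, so there is
// no shared mutable state to lock.
std::uint64_t partitioned_sum(const std::vector<std::uint32_t>& data,
                              unsigned nthreads) {
    std::vector<std::uint64_t> partial(nthreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = (data.size() + nthreads - 1) / nthreads;

    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t lo = t * chunk;
            const std::size_t hi = std::min(data.size(), lo + chunk);
            std::uint64_t s = 0;            // thread-private accumulator
            for (std::size_t i = lo; i < hi; ++i) s += data[i];
            partial[t] = s;                 // one write, to this thread's slot
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), std::uint64_t{0});
}
```

On a real NUMA box you'd additionally want each slice allocated on the memory node of the CPU that reads it, but the partitioning discipline above is the portable half of the idea.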
In a nutshell, that is the state of the hardware. What about the software that runs on these systems? I think Julian Bucknall has covered that quite well. Basically, it would be a good idea to get used to writing code that takes advantage of multi-processor architectures. This will present a whole new raft of problems, for sure, but there is a multitude of techniques for solving them.
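The most classic of those problems is the data race: two threads bumping a shared counter with a plain `++` will silently lose increments. A hedged C++ sketch of one standard fix, using `std::atomic` (a mutex works too; this is just the lightest-weight option for a single counter):

```cpp
#include <atomic>
#include <thread>

// Two threads each increment a shared counter `per_thread` times.
// With a plain int this would be a data race and increments would be
// lost; std::atomic makes each read-modify-write indivisible, so the
// final count is exact.
int atomic_count(int per_thread) {
    std::atomic<int> counter{0};
    auto work = [&] {
        for (int i = 0; i < per_thread; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return counter.load();
}
```

Atomics, locks, and the share-nothing partitioning mentioned above are exactly the kinds of techniques we'll all need to get comfortable with.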
We on the Delphi team are looking into various things we can do to help users write better code for these architectures. Everything from RTL/VCL support to language enhancements is on the plate for us to look into. Again, “necessity is the mother of invention,” and this current stalling of Moore's Law may be just the catalyst the software tools industry needs to step in and lend a hand. This is an opportunity.