New technology in computer science has been stagnant for a couple of decades now. Chipsets are much faster, but the new speed has been offset by poorly written compilers, bloated software, and slow disk I/O. Seymour Cray was the last great innovator. Cray systems had compilers that generated highly efficient executables. I/O was buffered through solid-state storage to compensate for the slowness of hard drive access. Applications were not bloated fat binaries and DLLs that slow the system to a crawl. All of this existed, albeit at a high cost at the time, by the early 80s.
All of this should have reached the desktop by now. However, today's "systems" have too many bottlenecks in their architecture. The worst is the hard drive. Why is there no very-high-speed, low-latency connection to solid-state storage, attached to the system board at far greater speed than any hard drive interface allows? Texas Memory Systems still has to connect through a slow interface, yet its hardware is capable of far more speed. The chips would be cheap and cool, and a disk backup could run in the background.
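To put the latency gap in rough numbers, here is a back-of-envelope sketch. The figures are assumed round numbers typical of the era (roughly 8 ms for a hard drive seek, roughly 60 ns for DRAM-based solid-state access), not measurements from any specific product:

    # Back-of-envelope comparison of storage latencies.
    # Both figures are assumed, era-typical round numbers,
    # not measurements from any specific product.

    HDD_SEEK_S = 8e-3      # ~8 ms average seek for a desktop hard drive
    DRAM_ACCESS_S = 60e-9  # ~60 ns access for DRAM-based solid-state storage

    ratio = HDD_SEEK_S / DRAM_ACCESS_S
    print(f"Hard drive latency:  {HDD_SEEK_S * 1e3:.1f} ms")
    print(f"Solid-state latency: {DRAM_ACCESS_S * 1e9:.0f} ns")
    print(f"Solid-state storage answers ~{ratio:,.0f}x faster per access")

Even allowing an order of magnitude for bus and controller overhead, the gap dwarfs any gain from a faster CPU sitting idle while it waits on the disk.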
Clearly, little technology has trickled down to the desktop when these PCs are still using 40-year-old data storage technology. Compilers have been getting worse at optimization, leading to bloated software that runs slowly, and slower still with dynamic link libraries that duplicate modules, sometimes compiled with different generations of compilers and compiler options. Java and its ilk are little more than a perpetual beta program that few, if any, can get to perform at a reasonable speed.
As long as desktops and even existing PC servers continue with the status quo, systems will take up more space and accomplish less than they should. Today's handhelds have more power than mainframes had twenty-five years ago, yet they are little more than toys by comparison.
Dual Core G5, the Last Computer You Ever Buy?