© March 1999 Tony Lawrence
Order (or just read more about) High Performance Computing from Amazon.com
Is a Pentium 200 a high performance computer? Certainly if you compare it to 1970s computing, it is. In fact, the small computers of today have many of the features of yesterday's supercomputers. The Pentium Pro CPU, for example, is really a RISC design with pipelining, out-of-order execution and speculative execution: features that only a few years back were found only in extremely expensive systems.
That's why it's always interesting to read a book like this. Some of what this book covers still is in the range of millions of dollars, but the scale slides downward very quickly, and the concepts of the supercomputers reach our desktops amazingly soon.
Most of the focus of this book is on the big iron. There's little mention of OS issues at all, except in a high-level, very generic sense. The authors' language of choice is Fortran, and they seem quite adamant that nothing else is of much value. That's a prejudice that is easy to ignore, however, because everything discussed here really applies to any language.
The book begins with a discussion of RISC (Reduced Instruction Set Computer), and they make a good point:
People will argue forever but, in a sense, reducing the instruction set was never an end in itself, it was a means to an end.
That end, of course, is faster computing. Intel's MMX instructions are a good example of adding complex instructions (instructions that wouldn't fit in the old RISC model) that increase the overall speed of processing. Out-of-order and speculative execution really have nothing to do with the size of the instruction set, either.
Caches and memory designs are covered next, and then the book branches into software issues for high performance machines, which is what most of the book is about: how to optimize software for the characteristics of the underlying hardware. Most of the code examples are Fortran, though there is a grudging nod to C now and then, and really the language isn't that important. What the authors are showing is how ignorance of the organization and size of caches can lead to code that accesses slower main memory more often than it needs to. Few of us need to worry about such things very often, but it's always good to understand the issues.
Got something to add? Send me email.