Ever since the invention of the computer, users have demanded more and more computational power to tackle increasingly complex problems. A common means of increasing the computational power available for solving a problem is to use parallel computing. Unfortunately, creating efficient parallel programs is notoriously difficult. In addition to all of the well-known problems associated with constructing a good serial algorithm, there are a number of problems specifically associated with constructing a good parallel algorithm. These mainly revolve around ensuring that all processors are kept busy and that they have timely access to the data they require. Controlling a number of processors operating in parallel can be exponentially more complicated than controlling one. Furthermore, unlike data placement in serial programs, where sophisticated compilation techniques that optimise cache behaviour and memory interleaving are common, optimising data placement throughout the vastly more complex memory hierarchy of a parallel computer is often left to the parallel application programmer. All of these problems are compounded by the large number of parallel computing architectures in existence, which often exhibit vastly different performance characteristics, making it especially difficult to write well-optimised, portable code.

The primary weapon against these problems in a parallel programmer's or parallel computer architect's arsenal is, or at least should be, the art of performance prediction. This book provides a historical exposition of over four decades of research into techniques for modelling the performance of computer programs running on parallel computers.