18 of 18 people found the following review helpful
Randy Allen and Ken Kennedy are famous for their contributions
to compiler design theory. This book is a clearly written
discussion of the issues involving loop optimization and
dependence analysis. While this book also covers scalar
optimization issues, it is naturally complemented by Steven
S. Muchnick's excellent book "Advanced Compiler Design and
Implementation."
Randy Allen has spent many years implementing a variety of
compilers for supercomputers and hardware design languages.
While Ken Kennedy has published seminal theoretical work on
compiler optimization, he has also been involved in hands-on
implementation. The experience of these two authors results
in a book that covers the huge body of knowledge in
compiler optimization and provides this knowledge in a
practical form that can be used by software engineers working
on compiler design.
For anyone working on modern compilers that require sophisticated
optimization features, this is an important reference work.
As with Muchnick's book, I have owned this since it was first
published. Rereading it reminds me of what a gem this work is.
12 of 13 people found the following review helpful
on February 5, 2003
This book was immediately useful to me as a researcher in the field. Nearly every source-code transformation and optimization technique I'm aware of is covered here, which often saves sifting through stacks of papers or hunting for an elusive reference. If you're looking for a book to teach you the basics of how compilers work, this is certainly not the place to begin, but if you already own one good book on that, this one makes an excellent companion to it. It was slightly annoying that the book comes with two loose pages, one an errata list and the other to tape over a page early in the book, but that's what you get with first editions. Overall it's very good, and the errors are minor typos rather than factual goofs.
11 of 12 people found the following review helpful
Allen and Kennedy (A&K) haven't written your first compiler book. There's nothing about syntax analysis, code generation, instruction scheduling, or intermediate representations. You already know all that part, or you won't get very far in this book. Once you have the basics down, A&K is an irreplaceable reference.
It centers heavily on Fortran - even today a mainstay of scientific computing and an active area of language development. Today, just as 50 years ago, the language's straightforward structure makes detailed behavioral analysis relatively easy. That's especially true in handling the array computations that soak up so many dozens of CPU-hours per second (as of this writing) on today's largest machines. There's far too much to summarize here, but A&K cover a huge range of processor features, including caches, multiple ALUs, vector units, chaining, and more. C code gets some attention as well, much needed because of the cultural weirdness around array handling in C. In every case, the focus is on the real-world kernels that need the help and on explicit ways of identifying and manipulating those code structures. As a result, the authors disregard the unreal situations that sometimes arise, e.g. in
"while (--n) *a++ = *b++ * *c++;"
Yes, the arrays pointed to by a, b, and c can overlap. But the pointer a can also point at a, b, c, or n itself somewhere in its range - and likewise for pointers b and c, or all three. There is essentially no limit to how bad this can get, e.g. when n is an alias for a, b, or c. Yes, these are rare situations and generally errors - but I've seen on-the-fly code generation in production environments, so even the A&K example isn't as bad as it gets. I admit these are pathological cases, though, better suited to an 'Obfuscated C' contest than to a compiler textbook.
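To make the contrast concrete (my own sketch, not an example from the book), C99's restrict qualifier is the standard way for a programmer to rule out exactly this kind of aliasing and hand the compiler a loop it can analyze much like a Fortran array kernel:

```c
#include <stddef.h>

/* Without restrict, the compiler must assume a, b, and c may alias,
   which blocks vectorization of this loop. With restrict, the
   programmer promises the arrays do not overlap, so the compiler
   may safely vectorize or parallelize it. */
void multiply(size_t n, double *restrict a,
              const double *restrict b,
              const double *restrict c)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] * c[i];
}
```

If the promise is violated (the arrays really do overlap), the behavior is undefined, which is why dependence analysis for C remains harder than for Fortran.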
The real disappointment comes from the section on compilation for Verilog and VHDL, and that disappointment may be a matter of emphasis only. The authors focus heavily on the strangeness of four-valued bits, which exist in Verilog and VHDL simulation, but not in synthesis. I.e., not in what really matters to a deployed application. The real challenge lies in compilation of C or Fortran into gates, a topic that the authors barely skim. That, however, is still a field of research exotica. It should be mentioned in a general book on compilation, as it is here, but awaits a text of its own.
All you processor designers out there should read the title a little differently. You should read this as "Modern Architectures for Optimizing Compilers," but you probably worked that out for yourself. If you have the luxury of defining your own memory structure, all that analysis of memory access will give you plenty of ideas for your next ASIP. It will certainly give you lots of ways to quantify the behavior of your target applications, so you'll know just how to get the most MIPS per Mgate, including hard limits on how much hardware parallelism can actually do you any good.
All architects of performance computing systems, hardware or software, need this book. Even application developers can learn better ways to cooperate with the compilers and tools that run their codes. It has my very highest recommendation.
6 of 6 people found the following review helpful
on August 10, 2005
This book is a very thorough look at all the ways you can extract and use parallelism and data dependencies advantageously in an optimizing compiler, depending on your target architecture. As one example, it contains every imaginable way to deal with arrays and loops and the maddeningly complex data dependencies that can result from their various interminglings. The book is refreshingly easy to read and contains pseudo-code and step-by-step examples everywhere you'd want to see them.
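As a minimal sketch of the kind of loop-carried dependence the book teaches you to recognize (my example, not taken from the text):

```c
/* Loop-carried (flow) dependence: iteration i reads a[i-1], which
   iteration i-1 wrote, so the iterations cannot run in parallel
   as written. The dependence distance is 1. */
void prefix_sum(int n, double a[])
{
    for (int i = 1; i < n; i++)
        a[i] = a[i] + a[i - 1];
}

/* No loop-carried dependence: each iteration touches only a[i],
   so the loop can be vectorized or parallelized directly. */
void scale(int n, double a[], double s)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] * s;
}
```

Distinguishing these two cases mechanically, across nested loops and subscripted arrays, is precisely the dependence testing that occupies much of the book.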
1 of 1 people found the following review helpful
on June 5, 2010
Format: Hardcover | Verified Purchase
This is the only compiler book I know of that performs a comprehensive study of dependences and their applications. It does not provide the theory to learn how a compiler front end works; rather, it focuses on dependence-based optimization, with applications to parallelism and cache optimization. I consider that this book serves its purpose perfectly.