A warning about the title. Some confusion may arise over whether the book is about "computational complexity theory" or the field of "complexity" being pioneered by places like the Santa Fe Institute. Without necessarily pigeonholing the book into one of these fields, I will warn "complexity" types that it dives heavily into the rigorous field of computational complexity theory (i.e., P/NP, theoretical upper bounds on the running times of algorithms, etc.), and reassure readers from the computational complexity theory camp that the book is more rigorous than the cover or the title might lead you to believe.

My first introduction to this book/subject area was when Lenore Blum (one of the authors) gave a talk at Carnegie Mellon University, mostly following the outline of the book. I found the talk so interesting that I went out and bought the book. While I am not a professional CS theorist, I did attend many of the theory seminars at CMU while I was an undergrad there (you may call me a "hobby theorist"). The talk on this book was one of the few that seemed as novel and mind-blowing to me as my first introduction to theory had been (just in terms of "Wow, this is cool!", "Ooh, I never thought of those things in that way", etc.).

The book is about a novel approach to applying discoveries from complexity theory to the analysis of numerical algorithms. Pure complexity theory quickly becomes unwieldy here, since the input/output size of a real number approximated on a Turing tape depends on many factors, including the precision of the representation and the representation method itself. Techniques from applied algorithms (most notably, the "RAM machine" model of the 1970s) have the unfortunate side effect of being able to solve problems in NP in polynomial time.
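To give a concrete feel for why unit-cost models misbehave, here is a toy sketch (my own illustration, not from the book, and a standard textbook trick rather than the authors' argument): if every arithmetic operation counts as one step regardless of operand size, repeated squaring manufactures astronomically large numbers in "linear time", which is the loophole that makes such models unrealistically powerful.

```python
# Toy illustration (not from the book): under a unit-cost RAM model,
# each multiplication counts as a single step no matter how big the
# operands are, so the numbers below are built in "10 steps".

def repeated_squaring(steps: int) -> int:
    """Perform `steps` unit-cost multiplications starting from 2."""
    x = 2
    for _ in range(steps):
        x = x * x  # one "step" in the RAM model, but the bit-length doubles
    return x

n = repeated_squaring(10)  # ten "unit-cost" operations...
print(n.bit_length())      # ...yield a 1025-bit number, i.e. 2**(2**10)
```

After k squarings the value is 2^(2^k), so its size grows exponentially in the number of steps taken, which is exactly the mismatch between step-counting and bit-counting that the review describes.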
Blum and her coauthors take the novel approach of simply allowing this side effect while still getting meaningful complexity bounds on real-valued computation, by creating a real-valued analog of the discrete Turing machine used in classical complexity theory. Along the way, the authors show that while this model does allow problems in NP to be solved in polynomial time, it introduces a class that is not NP but analogous to it, in the sense that theorems on real-valued algorithms have proofs similar to their discrete counterparts in classical complexity theory.

While this is not necessarily useful to most practical programmers, it is, in addition to being a fascinating and novel way to look at numerical algorithms, also a fascinating subject to think about when looking at the physical world. Among the physical processes that can be looked at from the perspective of this book are the much-hyped chaotic systems prevalent in the (unfortunately named, and not very closely connected to "Computational Complexity Theory") field of "complexity" associated with the Santa Fe Institute (hence the picture of the Mandelbrot set on the cover, which the authors study as a decidability problem within the framework of the new real-valued Turing machines introduced in this book).

My one complaint about the book is that, while the talk Lenore gave at CMU was aimed at an audience more familiar with computational complexity theory than with continuous mathematics, the book goes the other way around, making painstaking explanations of elementary computability/complexity theory while assuming a strong knowledge of continuous mathematics.
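For readers who want a concrete handle on the Mandelbrot decidability question mentioned above, here is a minimal sketch (my own, not the book's formalism) of the standard escape-time test. A point c is in the Mandelbrot set iff the orbit of 0 under z → z² + c stays bounded; numerically we can only ever confirm *escape* in finitely many steps, so the naive procedure semi-decides non-membership. The function name and iteration cap are illustrative choices.

```python
def escapes(c: complex, max_iter: int = 1000) -> bool:
    """Semi-decision sketch: return True if the orbit of 0 under
    z -> z*z + c provably escapes (|z| > 2) within max_iter steps.
    A False answer is inconclusive: c may or may not be in the set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2, the orbit is known to diverge
            return True
    return False

print(escapes(1 + 0j))   # orbit 0, 1, 2, 5, ... escapes -> True
print(escapes(-1 + 0j))  # orbit 0, -1, 0, -1, ... stays bounded -> False
```

The asymmetry here, confirmation of escape in finite time versus no finite confirmation of membership, is what makes the decidability status of the set a natural question for the real-valued machines the book introduces.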