- Hardcover: 508 pages
- Publisher: Morgan Kaufmann; 1st edition (June 4, 2001)
- Language: English
- ISBN-10: 1558606440
- ISBN-13: 978-1558606449
- Product Dimensions: 9.5 x 7.6 x 1.2 inches
- Shipping Weight: 2.6 pounds
- Average Customer Review: 2 customer reviews
- Amazon Best Sellers Rank: #4,396,823 in Books
Implicit Parallel Programming in *pH* Hardcover – June 4, 2001
Suitable for the mathematically adept researcher or computer science student, Implicit Parallel Programming in pH provides a textbook-style guide to the new pH computer language, a functional language syntactically similar to Haskell but with built-in support for parallel processing.
Besides providing a perspective on the issues of parallel processing, this text is first and foremost an in-depth tutorial to the pH language (which was developed at MIT). While many programmers have managed threads and processes explicitly, pH makes parallel computing automatic. Because it is a functional programming language with limited support for program state, algorithms can be efficiently "parallelized" automatically with little or no programmer intervention.
After introducing the state of parallel programming today, the book delves into an intensive (and mathematically astute) tutorial for working in pH, from the basic syntax of the language to rules for encoding algorithms effectively. The raw syntax of pH resembles Haskell, a well-known functional programming language. To help the beginner, the authors also provide an appendix tutorial on the lambda calculus (which provides the underpinnings of functional programming languages). Sample problems drawn from linear algebra and chemistry (modeling paraffin molecules) help round out the concepts for the reader.
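Since pH's surface syntax is essentially Haskell's, a short fragment of ordinary Haskell gives the flavor of the style the tutorial teaches (this is plain Haskell, not pH; the parallel reading is only implied):

```haskell
-- Ordinary Haskell, syntactically close to pH. The two recursive
-- calls below share no data dependence, so an implicitly parallel
-- implementation is free to evaluate them simultaneously.
fib :: Integer -> Integer
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 20)   -- 6765
```

In a language like pH, no annotation is needed to expose this parallelism: the absence of side effects makes the independence of the two calls evident to the compiler.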
Later chapters cover pH's extensions for sequential programming and for sharing data between modules. I-structures allow multiple processes to read data in parallel, while M-structures allow read-and-write access to mutable data. (Used sparingly, these techniques supplement the purely functional core of pH, which by default eschews the mutable variables of traditional sequential programming.)
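pH's M-structures are not available in mainstream Haskell, but Haskell's `MVar` offers a reasonable approximation of their take/put discipline (a sketch by analogy, not pH code):

```haskell
import Control.Concurrent.MVar

-- An MVar is a cell that takeMVar empties and putMVar refills;
-- much as with an M-structure, a taker of an empty cell blocks
-- until a writer fills it, so access to the mutable state is
-- synchronized on the cell itself.
main :: IO ()
main = do
  cell <- newMVar (0 :: Int)
  x <- takeMVar cell       -- empty the cell, obtaining its value
  putMVar cell (x + 1)     -- refill it with the updated value
  readMVar cell >>= print  -- prints 1
```

I-structures, by contrast, are closer to write-once cells whose readers block only until the single write occurs, which is what makes fully parallel reads safe.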
Short exercises supplement each chapter, and the book concludes with a discussion of the future of pH (currently a research project) and parallel programming in general. The authors envision a day when parallel programming is the norm and sequential programming is the exception. In the meantime, this intelligent and fast-moving computer science title can help put parallel computing (and functional programming) into the hands of any interested computer science student or researcher. --Richard Dragan
- Overview of parallel execution techniques (explicit and implicit parallel programming)
- Functional languages
- Basic tutorial for pH (data types, recursion, blocks, static scoping, and loops)
- Types and type checking (including static type checking, polymorphism, and operator overloading)
- Rewrite rules, reduction strategies, and parallelism
- Determinacy and termination
- Tuples and algebraic product types (including rewrite rules for algebraic types)
- Lists and algebraic sum types
- Sums of products
- Lists and list comprehensions
- Graphs and binary trees
- Arrays and multidimensional arrays
- Techniques for solving linear equations
- Sample problems from chemistry (modeling paraffins)
- Monadic and parallel I/O
- Explicit sequencing and barriers
- Lists and graphs (mutable synchronized state)
- The future of parallelism and the pH language
- Tutorial for lambda calculus
- Reference for rewrite rules for pH
Recent publicity includes a review in the Aug./Sept. issue of JOOP Magazine ("best books," with cover mention).
Top customer reviews
For one thing, like APL but with a more normal character set, it unashamedly slings arrays around with merry abandon - and with a much more versatile way of performing element operations, reductions, and more. Because purely functional execution prohibits one computation from affecting another incidentally, synchronization problems in handling different data elements all but vanish. This opens the way to aggressive compiler optimizations not possible when dependencies exist, or might, and encourages parallel execution across the full width of an array or even expression tree. And, because of the authors' long history with Haskell and other experimental languages, they've developed idioms and approaches to standard number-crunching algorithms, like Gaussian elimination or L-R decomposition - computations that absorb countless megawatt-hours of CPU usage every year. They realized that there's no hope for a new language unless it solves genuine, difficult problems.
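In ordinary Haskell, the side-effect-free element operations and reductions described here look something like the following (a rough sketch; pH's own array notation differs, and the parallelism is left to the implementation):

```haskell
-- Element-wise work followed by a reduction, with no side effects:
-- no element's computation can observe another's, so an implicitly
-- parallel implementation may evaluate them in any order, or all
-- at once.
main :: IO ()
main = do
  let xs      = [1 .. 10] :: [Int]
      squared = map (^ 2) xs  -- independent per-element operations
      total   = sum squared   -- a reduction over the results
  print total                 -- 385
```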
But, like every functional language, it needs to exist within state-based hardware in a state-based world. So, in the later chapters, the authors introduce mechanisms that let parallel computations share data seamlessly and safely, and perform real-world I/O in ways that make sense. Despite the authors' efforts, these mechanisms came across as somewhat clumsy, and certainly not enticing to someone accustomed to imperative processing.
In sum, this work anticipates the kinds of massive and fine-grained parallelism at the leading edge of current computation. Although parts of it appeal greatly, I find a few impediments to wide acceptance. The minor one is that the language's scoping rules show no obvious growth path toward the kind of namespaces that keep million-line applications somewhat sane. A bigger problem lies in the circumlocutions sometimes needed to translate imperative implementations of common computations into these terms, but experience and a design-pattern community could overcome that. Most seriously, though, it seems to ignore hardware implementation as a matter of policy. If you have experience with the performance consequences of a misused memory model, you might be hard-pressed to find good ways to adapt pH programming to cache hierarchies, NUMA, GPUs' wide fetch and retire, and other emerging weirdness. According to some, the benefit of parallel processing is not that multiple computations run in parallel, but that multiple memory accesses do. So I find this a fascinating piece of research, but I can't respond to its call to action.