
Patterns for Parallel Programming (Software Patterns Series), Hardcover

ISBN-13: 978-0321228116 ISBN-10: 0321228111 Edition: 1st

Formats and editions:

  • Kindle
  • Hardcover: new from $145.95, used from $28.31


Product Details

  • Hardcover: 384 pages
  • Publisher: Addison-Wesley Professional; 1st edition (September 25, 2004)
  • Language: English
  • ISBN-10: 0321228111
  • ISBN-13: 978-0321228116
  • Product Dimensions: 9.6 x 7.2 x 0.9 inches
  • Shipping Weight: 1.7 pounds
  • Average Customer Review: 3.7 out of 5 stars (7 customer reviews)
  • Amazon Best Sellers Rank: #965,483 in Books

Editorial Reviews

About the Author

Timothy G. Mattson is Intel's industry manager for life sciences. His research focuses on technologies that simplify parallel computing for general programmers, with an emphasis on computational biology. He holds a Ph.D. in chemistry from the University of California, Santa Cruz.

Beverly A. Sanders is associate professor at the Department of Computer and Information Science and Engineering, University of Florida, Gainesville. Her research focuses on techniques to help programmers construct high-quality, correct programs, including formal methods, component systems, and design patterns. She holds a Ph.D. in applied mathematics from Harvard University.

Berna L. Massingill is assistant professor in the Department of Computer Science at Trinity University, San Antonio, Texas. Her research interests include parallel and distributed computing, design patterns, and formal methods. She holds a Ph.D. in computer science from the California Institute of Technology.




Excerpt. © Reprinted by permission. All rights reserved.

"If you build it, they will come."

And so we built them. Multiprocessor workstations, massively parallel supercomputers, a cluster in every department ... and they haven't come. Programmers haven't come to program these wonderful machines. Oh, a few programmers in love with the challenge have shown that most types of problems can be force-fit onto parallel computers, but general programmers, especially professional programmers who "have lives", ignore parallel computers. And they do so at their own peril. Parallel computers are going mainstream. Multithreaded microprocessors, multicore CPUs, multiprocessor PCs, clusters, parallel game consoles ... parallel computers are taking over the world of computing. The computer industry is ready to flood the market with hardware that will only run at full speed with parallel programs. But who will write these programs?

This is an old problem. Even in the early 1980s, when the "killer micros" started their assault on traditional vector supercomputers, we worried endlessly about how to attract normal programmers. We tried everything we could think of: high-level hardware abstractions, implicitly parallel programming languages, parallel language extensions, and portable message-passing libraries. But after many years of hard work, the fact of the matter is that "they" didn't come. The overwhelming majority of programmers will not invest the effort to write parallel software.

A common view is that you can't teach old programmers new tricks, so the problem will not be solved until the old programmers fade away and a new generation takes over.

But we don't buy into that defeatist attitude. Programmers have shown a remarkable ability to adopt new software technologies over the years. Look at how many old Fortran programmers are now writing elegant Java programs with sophisticated object-oriented designs. The problem isn't with old programmers. The problem is with old parallel computing experts and the way they've tried to create a pool of capable parallel programmers.

And that's where this book comes in. We want to capture the essence of how expert parallel programmers think about parallel algorithms and communicate that essential understanding in a way professional programmers can readily master. The technology we've adopted to accomplish this task is a pattern language. We made this choice not because we started the project as devotees of design patterns looking for a new field to conquer, but because patterns have been shown to work in ways that would be applicable in parallel programming. For example, patterns have been very effective in the field of object-oriented design. They have provided a common language experts can use to talk about the elements of design and have been extremely effective at helping programmers master object-oriented design.

This book contains our pattern language for parallel programming. The book opens with a couple of chapters to introduce the key concepts in parallel computing. These chapters focus on the parallel computing concepts and jargon used in the pattern language as opposed to being an exhaustive introduction to the field. The pattern language itself is presented in four parts corresponding to the four phases of creating a parallel program:

Finding Concurrency. The programmer works in the problem domain to identify the available concurrency and expose it for use in the algorithm design.

Algorithm Structure. The programmer works with high-level structures for organizing a parallel algorithm.

Supporting Structures. We shift from algorithms to source code and consider how the parallel program will be organized and the techniques used to manage shared data.

Implementation Mechanisms. The final step is to look at specific software constructs for implementing a parallel program.

The patterns making up these four design spaces are tightly linked. You start at the top (Finding Concurrency), work through the patterns, and by the time you get to the bottom (Implementation Mechanisms), you will have a detailed design for your parallel program.
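
To make that top-to-bottom flow concrete, here is a brief illustration in C. It is our sketch, not one of the book's worked examples: a simple histogram computation, with comments tracing it through the four design spaces and ending in OpenMP-style code (compile with cc -fopenmp).

    /* Sketch (ours, not the book's): a histogram traced through the
     * four design spaces.
     *
     * Finding Concurrency:       every input element can be binned
     *                            independently of the others.
     * Algorithm Structure:       the independent updates map onto a
     *                            parallel loop over the input.
     * Supporting Structures:     the shared bin counters are the data
     *                            to manage; updates must be atomic.
     * Implementation Mechanisms: OpenMP directives on plain C.
     */
    #include <stdio.h>

    #define N     1000000
    #define NBINS 8

    int main(void) {
        static unsigned char data[N];
        long bins[NBINS] = {0};

        for (int i = 0; i < N; i++)      /* toy input */
            data[i] = (unsigned char)(i % NBINS);

        #pragma omp parallel for         /* split iterations across threads */
        for (int i = 0; i < N; i++) {
            #pragma omp atomic           /* protect the shared counter */
            bins[data[i]]++;
        }

        for (int b = 0; b < NBINS; b++)
            printf("bin %d: %ld\n", b, bins[b]);
        return 0;
    }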

If the goal is a parallel program, however, you need more than just a parallel algorithm. You also need a programming environment and a notation for expressing the concurrency within the program's source code. Programmers used to be confronted by a large and confusing array of parallel programming environments. Fortunately, over the years the parallel programming community has converged around three programming environments.

OpenMP. A simple language extension to C, C++, or Fortran to write parallel programs for shared-memory computers.

MPI. A message-passing library used on clusters and other distributed-memory computers.

Java. An object-oriented programming language with language features supporting parallel programming on shared-memory computers and standard class libraries supporting distributed computing.

Many readers will already be familiar with one or more of these programming notations, but for readers completely new to parallel computing, we've included a discussion of these programming environments in the appendixes.
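
For a taste of these notations, here is a minimal MPI program in C; a sketch under the usual MPI conventions (compiled with mpicc and launched with, say, mpirun -np 4), not an example taken from the book. Each process holds a partial value, and a single collective call combines them on rank 0 by explicit message passing.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int local = rank;    /* this process's contribution */
        int total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }

An OpenMP or Java version of the same reduction would replace the explicit messages with operations on shared memory; the choice of environment is largely a choice of where the data lives.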

In closing, we have been working for many years on this pattern language. Presenting it as a book so people can start using it is an exciting development for us. But we don't see this as the end of this effort. We expect that others will have their own ideas about new and better patterns for parallel programming. We've assuredly missed some important features that really belong in this pattern language. We embrace change and look forward to engaging with the larger parallel computing community to iterate on this language. Over time, we'll update and improve the pattern language until it truly represents the consensus view of the parallel programming community. Then our real work will begin--using the pattern language to guide the creation of better parallel programming environments and helping people to use these technologies to write parallel software. We won't rest until the day sequential software is rare.




More About the Author

Tim Mattson is a scientist (Ph.D., theoretical chemistry), parallel programmer, and writer. He has had the privilege of working on some of the world's most exotic computers (ASCI Red, the first TFLOP computer, in 1996) and experimental CPUs (the first TFLOP CPU, in 2007), and has worked on several important parallel programming languages (MPI, OpenMP, and OpenCL).

In addition to his technical work, Tim is a well-known kayak instructor (ACA Level 5 instructor, ACA Level 3 instructor trainer) who lectures at venues across the Pacific Northwest on the science and anthropology of kayaking.

Dr. Mattson's research for the last decade has focused on the intersections between cognitive psychology, software engineering, and scalable computing. His ongoing research (in partnership with the ParLab at UC Berkeley) is to develop a large-scale design pattern language that addresses the problem of engineering robust scalable applications. You can follow this work at the project URL: http://parlab.eecs.berkeley.edu/wiki/patterns

Customer Reviews

3.7 out of 5 stars

Most Helpful Customer Reviews

38 of 39 people found the following review helpful. By wiredweird (Hall of Fame, Top 500 Reviewer) on October 8, 2006
Format: Hardcover
Parallel programming has been around for years, in many different forms. It has usually been a specialty for supercomputing number crunchers and for the occasional OS geek. Now that traditional, single-processor solutions are hitting the wall, Moore's Law must grow in new directions: multithreaded processors, multi-cores, multi-processors, and wilder exotica. The hardware is entering the market now, and the software community is scrambling to develop the necessary skills for parallel program development. This book gives a fair introduction to a large range of the techniques available.

After getting the reader oriented to the basics of parallel programming, the authors lay out four "design spaces," or families of related patterns. Within each space, the authors present a handful of patterns using a common and reasonably familiar format: name, problem addressed, context, forces acting on the design, the solution, and examples of the pattern's usage. They identify spaces named Finding Concurrency, Algorithm Structure, Supporting Structures, and Implementation Mechanisms. Of course, these topics overlap to some extent, especially in the interplay of algorithm design and exploitable parallelism, or in language and API primitives that blur the support mechanisms available with the implementation choices available to the programmer. The authors show how the pieces come together in familiar applications, including molecular dynamics and medical imaging. Appendices sketch the basic programming constructs available in three of the major parallelism toolkits around: OpenMP, MPI, and Java.

Although valuable, this book has a number of weaknesses. For example, they cite the Cooley-Tukey FFT algorithm as a winning example of "Divide and Conquer."
Read more ›
28 of 30 people found the following review helpful. By Michael Entin on November 23, 2005
Format: Hardcover
This is an excellent introduction to parallel computing. It presents patterns for discovering what can be parallelized, what data structures can be used, and how to choose algorithms. Patterns are demonstrated by good examples showing the benefits and trade-offs of different solutions. There is also a brief but very useful introduction to common implementations: OpenMP, MPI, and a regular procedural approach demonstrated with Java.

Some caution about what this book is not: this is not a general parallel programming design-and-patterns book (as I expected from the title). The focus of this book is parallel computing (i.e., scalable _calculations_, often scientific). There is somewhat more to parallel _programming_ than this book covers.

Still, I found this book very good and useful, even though I expected broader coverage.
5 of 5 people found the following review helpful. By Alexandros Gezerlis on September 3, 2010
Format: Hardcover Verified Purchase
"Patterns for Parallel Programming" (PPP) is the outcome of a collaboration between Timothy Mattson of Intel and Beverly Sanders & Berna Massingill (who are academic researchers). It introduces a pattern language for parallel programming, and uses OpenMP, MPI, and Java to flesh out the related patterns.

The Good: this volume discusses both shared-memory and distributed-memory programming, all between one set of covers. It also makes use of a general-purpose programming language and is therefore of interest both to computational scientists who are interested in clusters and to programmers interested in multiprocessors (these days that covers pretty much everyone). More generally, PPP offers valuable advice to those interested in robust parallel software design. The authors cover a number of topics that are an essential part of parallel-programming lore (e.g. the 1D and 2D block-cyclic array distributions in Chapter 5). In other words, they codify existing knowledge, which is precisely what patterns are supposed to do. To accomplish this, they make effective use of a small number of examples (like molecular dynamics and the Mandelbrot set), which allows them to show a specific problem approached from different design spaces, and from different patterns within one design space. This book follows in the footsteps of the illustrious volume "Design Patterns" by the Gang of Four (GoF). In chapters 3, 4, and 5, Mattson, Sanders, and Massingill introduce a number of patterns using a simplified version of the GoF template. Despite the structural similarities between the two books, PPP is more readable than the GoF volume.
Read more ›
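
For readers who have not met the block-cyclic distributions mentioned in the review above: in the 1D case, an array is cut into blocks of size B, and the blocks are dealt out round-robin to P processes. A minimal sketch in C (our illustration, not code from the book):

    #include <stdio.h>

    /* Owner of global element i under a 1D block-cyclic distribution:
       blocks of size B are dealt round-robin to P processes. */
    static int owner(int i, int B, int P) {
        return (i / B) % P;   /* block number, wrapped around the processes */
    }

    int main(void) {
        const int B = 4, P = 3;
        for (int i = 0; i < 24; i++)
            printf("element %2d -> process %d\n", i, owner(i, B, P));
        return 0;
    }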
11 of 14 people found the following review helpful. By J. S. Hardman on October 20, 2007
Format: Hardcover
Normally, design pattern books are things that you dip into rather than read end to end, simply because they can be very dry reading. Not this one: as long as you have an interest in parallel programming, reading it end to end should be easy. That's not to say you couldn't just dip into the bits that are most applicable to your work; I'm sure you could.

Many of the examples of where each pattern is used come from industry sectors other than my own. But the descriptions are good enough that it is easy to picture uses beyond the examples given, and to recognize where you have applied a pattern yourself without knowing it had a name, even if you have been doing it that way for years.

Much of the material in this book is hard to find elsewhere. I've heard bits of it at Intel seminars or seen it touched on in Intel books (e.g. the Threading Building Blocks book), but otherwise have not seen this material in print, even though many people (possibly unknowingly) are implementing the same ideas in code.

Excellent book. I've knocked one star off though, simply because the authors work on the premise that almost everyone is using one of OpenMP, MPI or Java. In practice, there are still an awful lot of people implementing such systems using C++ with either native threading APIs or third party libraries wrapping those threading APIs.