- Hardcover: 656 pages
- Publisher: Pearson; 2nd edition (January 26, 2003)
- Language: English
- ISBN-10: 0201648652
- ISBN-13: 978-0201648652
- Product Dimensions: 6.3 x 1.7 x 9.3 inches
- Shipping Weight: 2.2 pounds
- Average Customer Review: 12 customer reviews
- Amazon Best Sellers Rank: #295,016 in Books
Introduction to Parallel Computing, 2nd Edition
From the Back Cover
Introduction to Parallel Computing, Second Edition
Increasingly, parallel processing is being seen as the only cost-effective method for the fast solution of computationally large and data-intensive problems. The emergence of inexpensive parallel computers such as commodity desktop multiprocessors and clusters of workstations or PCs has made such parallel methods generally applicable, as have software standards for portable parallel programming. This sets the stage for substantial growth in parallel software.
Data-intensive applications such as transaction processing and information retrieval, data mining and analysis, and multimedia services have provided a new challenge for the modern generation of parallel platforms. Emerging areas such as computational biology and nanotechnology have implications for algorithms and systems development, while changes in architectures, programming models, and applications have implications for how parallel platforms are made available to users in the form of grid-based services.
This book takes into account these new developments as well as covering the more traditional problems addressed by parallel computers. Where possible it employs an architecture-independent view of the underlying platforms and designs algorithms for an abstract model. Message Passing Interface (MPI), POSIX threads and OpenMP have been selected as programming models and the evolving application mix of parallel computing is reflected in various examples throughout the book.
* Provides a complete end-to-end source on almost every aspect of parallel computing (architectures, programming paradigms, algorithms and standards).
* Covers both traditional computer science algorithms (sorting, searching, graph, and dynamic programming algorithms) and scientific computing algorithms (matrix computations, FFT).
* Covers MPI, Pthreads and OpenMP, the three most widely used standards for writing portable parallel programs.
* The modular nature of the text makes it suitable for a wide variety of undergraduate and graduate level courses including parallel computing, parallel programming, design and analysis of parallel algorithms and high performance computing.
Ananth Grama is Associate Professor of Computer Sciences at Purdue University, working on various aspects of parallel and distributed systems and applications.
Anshul Gupta is a member of the research staff at the IBM T. J. Watson Research Center. His research areas are parallel algorithms and scientific computing.
George Karypis is Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota, working on parallel algorithm design, graph partitioning, data mining, and bioinformatics.
Vipin Kumar is Professor in the Department of Computer Science and Engineering and the Director of the Army High Performance Computing Research Center at the University of Minnesota. His research interests are in the areas of high performance computing, parallel algorithms for scientific computing problems and data mining.
Top customer reviews
I was hoping that by reading the book I'd learn something essential and get the basic philosophy of high-performance computing/parallel processing. Instead, I got more confused than before reading it! (I used to be a real-time software programmer, so the field is not totally new to me.) The authors tried to put everything into this small 633-page book.
Even my professor said it is useless to read the book and referred us to other research papers [Robertazzi's papers], and yes, these IEEE/ACM papers are much easier to understand! I also found that some websites explain the concepts much better. Another book that I think is better: "Fundamentals of Parallel Processing" by Harry F. Jordan and Gita Alaghband.
Don't waste your money on this book.
It provides a solid foundation for anyone interested in parallel computing on distributed memory architectures. Although there is some material on shared memory machines, this material is fairly limited which might be something the authors should change for a 3rd edition given the times we're living in.
The complaint I would raise is that the book doesn't always feel "clean". It's hard to give a concrete example, but sometimes you really have to spend time figuring out where a communication time complexity comes from, even though the authors refer to a table of communication time complexities. Why? Because the table assumes the underlying architecture is a hypercube, which isn't really made explicit anywhere (?).
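For readers hitting the same wall: the cost expressions I believe that table assumes are the standard hypercube results, with $t_s$ the message startup time, $t_w$ the per-word transfer time, $m$ the message size in words, and $p$ the number of processes:

```latex
% One-to-all broadcast on a p-node hypercube:
T_{\text{one-to-all}} = (t_s + t_w m)\log p
% All-to-all broadcast on a p-node hypercube:
T_{\text{all-to-all}} = t_s \log p + t_w m (p - 1)
```

Once you know the hypercube assumption, the $\log p$ factors in the book's derivations stop looking arbitrary.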
The content is OK, and fairly thorough, but as another reviewer noted, there's considerable handwaving going on in some of the explanations.
Bottom line: a cleaned-up 3rd edition could be a very good textbook. Too bad I'm stuck with the 2nd edition :(
"Foundations of Multithreaded, Parallel, and Distributed Programming" by Gregory Andrews is a much better written book. Unfortunately, Gregory's book does not cover the same content.
In most cases the reader is left to derive the bizarre math involved through the authors' hand-waving.
One of my personal favorites is a formula derivation on page 340, where the text jumps straight to:

n^2 = K^2 t_w^2 p^2  <-- what, did I miss something here?
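For anyone puzzled by the same line, one plausible reconstruction of the omitted step (my guess, assuming the usual isoefficiency setup $W = K\,T_o$ with problem size $W = n^2$ and a communication overhead term proportional to $t_w\,n\,p$) is:

```latex
n^2 = K\, t_w\, n\, p
\;\Rightarrow\; n = K\, t_w\, p
\;\Rightarrow\; n^2 = K^2\, t_w^2\, p^2
```

That is, divide both sides by $n$, then square; the book appears to skip straight to the final squared form.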
On top of that, there are numerous typos in the sparse visual examples that do exist, which makes it even more confounding to read through.
If you are evaluating the text for a possible parallel computing course, don't waste your time or money on it; your students will thank you. If you are a student looking to take a class that uses this text... dropping a brick on your foot might be more enjoyable. If you think I'm a disgruntled student trying to seek revenge, I'm not. I did fine in the course; I just want to make sure that no one else gets blind-sided by the nonsensical garbage that is this text. If there were a negative rating... this would be below 1 star.