- Paperback: 231 pages
- Publisher: Morgan Kaufmann; 1st edition (October 16, 2000)
- Language: English
- ISBN-10: 1558606718
- ISBN-13: 978-1558606715
- Product Dimensions: 7.3 x 0.6 x 9.3 inches
- Shipping Weight: 14.9 ounces
- Average Customer Review: 4.4 out of 5 stars (6 customer reviews)
- Amazon Best Sellers Rank: #1,587,977 in Books
Parallel Programming in OpenMP 1st Edition
The OpenMP standard allows programmers to take advantage of new shared-memory multiprocessor systems from vendors like Compaq, Sun, HP, and SGI. Aimed at the working researcher or scientific C/C++ or Fortran programmer, Parallel Programming in OpenMP both explains what this standard is and how to use it to create software that takes full advantage of parallel computing.
At its heart, OpenMP is remarkably simple. By adding a handful of compiler directives (or pragmas) in Fortran or C/C++, plus a few optional library calls, programmers can "parallelize" existing software without completely rewriting it. This book starts with simple examples of how to parallelize "loops"--iterative code that in scientific software might work with very large arrays. Sample code relies primarily on Fortran (undoubtedly the language of choice for high-end numerical software) with descriptions of the equivalent calls and strategies in C/C++. Each sample is thoroughly explained, and though the style in this book is occasionally dense, it does manage to give plenty of practical advice on how to make code run in parallel efficiently. The techniques explored include how to tweak the default parallelized directives for specific situations, how to use parallel regions (beyond simple loops), and the dos and don'ts of effective synchronization (with critical sections and barriers). The book finishes up with some excellent advice for how to cooperate with the cache mechanisms of today's OpenMP-compliant systems.
Overall, Parallel Programming in OpenMP introduces the competent research programmer to a new vocabulary of idioms and techniques for parallelizing software using OpenMP. Of course, this standard will continue to be used primarily for academic or research computing, but now that OpenMP machines by major commercial vendors are available, even business users can benefit from this technology--for high-end forecasting and modeling, for instance. This book fills a useful niche by describing this powerful new development in parallel computing. --Richard Dragan
- Overview of the OpenMP programming standard for shared-memory multiprocessors
- Description of OpenMP parallel hardware
- OpenMP directives for Fortran and pragmas for C/C++
- Parallelizing simple loops
- parallel do / parallel for directives
- Shared and private scoping for thread variables
- reduction operations
- Data dependencies and how to remove them
- OpenMP performance issues (sufficient work, balancing the load in loops, scheduling options)
- Parallel regions
- How to parallelize arbitrary blocks of code (master and slave threads, threadprivate directives and the copyin clause)
- Parallel task queues
- Dividing work based on thread numbers
- Noniterative work sharing
- Restrictions on work-sharing
- Nested parallel regions
- Controlling parallelism in OpenMP, including controlling the number of threads, dynamic threads, and OpenMP library calls for threads
- OpenMP synchronization
- Avoiding data races
- Critical section directives (named and nested critical sections, and the atomic directive)
- Runtime OpenMP library lock routines
- Event synchronization (barrier directives and ordered sections)
- Custom synchronization, including the flush directive
- Programming tips for synchronization
- Performance issues with OpenMP
- Amdahl's Law
- Load balancing for parallelized code
- Hints for writing parallelized code that fits into processor caches
- Avoiding false sharing
- Synchronization hints
- Performance issues for bus-based and Non-Uniform Memory Access (NUMA) machines
- OpenMP quick reference
"This book will provide a valuable resource for the OpenMP community."
Timothy G. Mattson, Intel Corporation
"This book has an important role to play in the HPC community--both for introducing practicing professionals to OpenMP and for educating students and professionals about parallel programming. I'm happy to see that the authors have put together such a complete OpenMP presentation."
Mary E. Zosel, Lawrence Livermore National Laboratory
Top Customer Reviews
After two introductory chapters, the authors introduce OpenMP in three stages: loop parallelism, general parallelism, and synchronization, roughly in order of increasing complexity. The authors present the necessary OpenMP pragmas and APIs at each step, showing how they address the immediate problems. An appendix summarizes the pragmas and APIs, in both their C/C++ and Fortran forms. OO C++ programmers may be dismayed by the amount of attention paid to an un-cool language like Fortran, but need to realize that it's still the lingua franca of performance programming. And, in fairness, the authors spend equal time on C++ idiosyncrasies, such as constructor invocations for variables that are silently replicated in each of the parallel threads.
If you've ever done performance programming, you're groaningly aware that getting the parallelism right is actually the easy part. The tricky parts come in breaking dependencies, in scheduling, in ensuring spatial and temporal locality, and in dealing with the cache coherency issues of multiprocessors. The authors give great introductions to all of the basics, including a patient description of how caches actually work, since there's a new crop of beginners every day. The authors describe performance analysis tools, but only briefly: the tools differ so much between vendors, and from one rev to the next, that any detailed description would be useless to most readers immediately and obsolete for all readers very soon.
This won't turn a beginner into a guru of performance computing. It will, however, establish a working competence in one popular parallelization tool, OpenMP, and in the computing technologies that affect parallel performance.