Programming Massively Parallel Processors: A Hands-on Approach (Applications of GPU Computing Series) Paperback – February 5, 2010

ISBN-13: 978-0123814722 ISBN-10: 0123814723 Edition: 1st
Formats and prices:
  • Kindle: $17.38
  • Hardcover: (no price listed)
  • Paperback, February 5, 2010: $54.43 new, $13.91 used

There is a newer edition of this item.


Editorial Reviews

Review

"For those interested in the GPU path to parallel enlightenment, this new book from David Kirk and Wen-mei Hwu is a godsend, as it introduces CUDA (tm), a C-like data parallel language, and Tesla(tm), the architecture of the current generation of NVIDIA GPUs. In addition to explaining the language and the architecture, they define the nature of data parallel problems that run well on the heterogeneous CPU-GPU hardware ... This book is a valuable addition to the recently reinvigorated parallel computing literature." - David Patterson, Director of The Parallel Computing Research Laboratory and the Pardee Professor of Computer Science, U.C. Berkeley. Co-author of Computer Architecture: A Quantitative Approach

"Written by two teaching pioneers, this book is the definitive practical reference on programming massively parallel processors--a true technological gold mine. The hands-on learning included is cutting-edge, yet very readable. This is a most rewarding read for students, engineers, and scientists interested in supercharging computational resources to solve today's and tomorrow's hardest problems." - Nicolas Pinto, MIT, NVIDIA Fellow, 2009

"I have always admired Wen-mei Hwu's and David Kirk's ability to turn complex problems into easy-to-comprehend concepts. They have done it again in this book. This joint venture of a passionate teacher and a GPU evangelizer tackles the trade-off between the simple explanation of the concepts and the in-depth analysis of the programming techniques. This is a great book to learn both massive parallel programming and CUDA." - Mateo Valero, Director, Barcelona Supercomputing Center

"The use of GPUs is having a big impact in scientific computing. David Kirk and Wen-mei Hwu's new book is an important contribution towards educating our students on the ideas and techniques of programming for massively parallel processors." - Mike Giles, Professor of Scientific Computing, University of Oxford

"This book is the most comprehensive and authoritative introduction to GPU computing yet. David Kirk and Wen-mei Hwu are the pioneers in this increasingly important field, and their insights are invaluable and fascinating. This book will be the standard reference for years to come." - Hanspeter Pfister, Harvard University

"This is a vital and much-needed text. GPU programming is growing by leaps and bounds. This new book will be very welcomed and highly useful across inter-disciplinary fields." - Shannon Steinfadt, Kent State University

"GPUs have hundreds of cores capable of delivering transformative performance increases across a wide range of computational challenges. The rise of these multi-core architectures has raised the need to teach advanced programmers a new and essential skill: how to program massively parallel processors." - CNNMoney.com

"This book is a valuable resource for all students from science and engineering disciplines where parallel programming skills are needed to allow solving compute-intensive problems."--BCS: The British Computer Society’s online journal

From the Back Cover

Programming Massively Parallel Processors: A Hands-on Approach shows both student and professional alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs.
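
To make the "C-like data parallel language" that Patterson's quote describes concrete, here is a minimal vector-add kernel in the style the book teaches. The names, sizes, and launch configuration are illustrative only and are not taken from the book's listings.

#include <cuda_runtime.h>

// Each thread computes one element of the output vector c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Host-side launch: one thread per element, 256 threads per block.
// d_a, d_b, d_c are assumed to be device pointers allocated elsewhere.
void launchVecAdd(const float *d_a, const float *d_b, float *d_c, int n) {
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
}

Each of the n threads handles exactly one array element, which is the data-parallel decomposition the book builds on.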



Product Details

  • Series: Applications of GPU Computing Series
  • Paperback: 280 pages
  • Publisher: Morgan Kaufmann; 1st edition (February 5, 2010)
  • Language: English
  • ISBN-10: 0123814723
  • ISBN-13: 978-0123814722
  • Product Dimensions: 7.5 x 0.6 x 9.2 inches
  • Shipping Weight: 1.2 pounds
  • Average Customer Review: 3.9 out of 5 stars (32 customer reviews)
  • Amazon Best Sellers Rank: #662,916 in Books



Customer Reviews

3.9 out of 5 stars

Most Helpful Customer Reviews

25 of 25 people found the following review helpful By Sergei Morozov on March 20, 2010
Format: Paperback Verified Purchase
This book is a much better introduction to programming GPUs via CUDA than the CUDA manual or the various presentations floating around the web. It is a little odd in coverage and language: you can tell it was written by two people with different command of English as well as different levels of passion. One co-author seems to be trying very hard to be colorful and to find idiot-proof analogies, but is prone to repetition. The other co-author sometimes sounds like a dry marketing droid. There are some mistakes in the code in the book, but not too many, since the authors don't dwell too long on code listings. In terms of coverage, I wish they had covered texture memory, profiling tools, examples beyond simple matrix multiplication, and advice on computational thinking for codes with random access patterns. Chapters 6, 8, 9, and 10 are worth reading several times, as they are full of practical tricks for trading one performance limiter for another in the quest for higher performance.
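
For readers wondering what the "simple matrix multiplication" running example the reviewer mentions looks like, here is a minimal sketch of a naive CUDA kernel in that spirit. Square matrices of dimension Width are assumed, and the code is an illustration rather than an excerpt from the book.

// Naive matrix multiplication: one thread computes one element of P = M * N.
// Width is the dimension of the square matrices; all pointers are device memory.
__global__ void matMulNaive(const float *M, const float *N, float *P, int Width) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < Width && col < Width) {
        float sum = 0.0f;
        for (int k = 0; k < Width; ++k) {
            sum += M[row * Width + k] * N[k * Width + col];
        }
        P[row * Width + col] = sum;
    }
}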
15 of 15 people found the following review helpful By Raymond Tay on February 21, 2010
Format: Paperback Verified Purchase
I think this book was written with the beginner in mind: if you're new to CUDA and having trouble understanding NVIDIA's documentation on the subject, then this is the book to get. The authors took time to clarify and solidify some of the more difficult terms, e.g. memory bandwidth utilization and optimization strategies. There are shortcomings in the book, though, and the two I could think of are typos (not really an issue, since they happen in every other book I've read) and the need for more examples to illustrate and solidify the concepts.

In a nutshell, a great beginner's book but not a handbook sort of book.
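
The "memory bandwidth utilization" mentioned above is commonly estimated by timing a kernel with CUDA events and dividing the bytes moved by the elapsed time. The sketch below assumes a hypothetical copyKernel that reads and writes n floats; it illustrates the technique and is not taken from the book.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: reads n floats and writes n floats.
__global__ void copyKernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Estimate effective bandwidth in GB/s: bytes moved / elapsed seconds.
float effectiveBandwidthGBs(const float *d_in, float *d_out, int n) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    copyKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed time in milliseconds

    double bytesMoved = 2.0 * n * sizeof(float);     // one read plus one write per element
    double gbPerSec = bytesMoved / (ms / 1000.0) / 1e9;

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return (float)gbPerSec;
}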
15 of 16 people found the following review helpful By Amazon Customer VINE VOICE on February 22, 2010
Format: Paperback Verified Purchase
This book fills a nice gap between the SDK samples, technical specifications, and online course content. If you are just getting started with GPGPU computing, this book leads you smoothly through the computation model, hardware architecture, and the programming model required to take advantage of the hardware.

As others have pointed out, this is not a large book, and it is fairly expensive. But for the first book on the market it's surprisingly useful, effective, and readable. Definitely recommended for newcomers to the platform. Experienced GPGPU developers should only pick it up as a "hand-out" for the people they need to train up, though.
11 of 12 people found the following review helpful By John West on February 24, 2010
Format: Paperback
As a beginning text this book has a significant advantage that beginning texts written for MPI, OpenMP, and so on don't have: there are 200 million CUDA-capable GPUs already deployed, and the odds are pretty good that most readers either have, or can readily get access to, a computer on which they can meaningfully learn parallel programming. If you are new to parallel programming and have access to a Tesla GPU, this book is a fine place to start your education. Readers already comfortable with parallel programming will find clear explanations of the Tesla GPU architecture and the performance implications of its hardware features, as well as a solid introduction to the principles of programming in CUDA, though they'll probably do a lot of skimming over the already-familiar basics.
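
The hardware features the reviewer refers to (number of multiprocessors, shared memory per block, warp size, and so on) can be inspected through the CUDA runtime's cudaGetDeviceProperties call. The query below is a minimal illustration, not material from the book.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // A few of the hardware limits that drive the performance discussion.
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Multiprocessors:        %d\n", prop.multiProcessorCount);
        printf("  Shared memory / block:  %zu bytes\n", prop.sharedMemPerBlock);
        printf("  Registers / block:      %d\n", prop.regsPerBlock);
        printf("  Warp size:              %d\n", prop.warpSize);
        printf("  Max threads / block:    %d\n", prop.maxThreadsPerBlock);
    }
    return 0;
}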
5 of 5 people found the following review helpful By Tyler Forge on February 4, 2011
Format: Paperback Vine Customer Review of Free Product
Executive summary: if you really want to dig into CUDA, go to the "CUDA Zone" on NVIDIA's website. Also, this book concentrates on using CUDA on a single GPU.

I think the target audience of this book is an undergraduate taking a CUDA or parallel programming class with the university supplying access to a pre-installed CUDA development system.

This book is very readable (compared to the usual stuff programmers read). I particularly enjoyed the parts about GPU architecture and how various CUDA commands and structures map onto the architecture.

As far as "hands-on" ... ummmm ... no. The code snippets look like they were taken from a linux system (or maybe windows with posix in there) but there isn't any real discussion about setting up a programming environment. To me, a true "hands-on" book should have the reader creating and running a "hello world" app ASAP. This book doesn't deliver that.

An experienced professional (not just an MCSE or script kiddie) might enjoy this book, but not if the goal is to sling code under a deadline. In that case the CUDA Zone is your friend.
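
To the reviewer's point, a first "hello world"-style CUDA program can be quite small. The sketch below is illustrative only (compile with nvcc): it fills an array on the GPU and prints it from the host.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread writes its global index into the output array.
__global__ void fillWithIndex(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

int main() {
    const int n = 8;
    int h_out[n];
    int *d_out;

    cudaMalloc((void **)&d_out, n * sizeof(int));          // allocate device memory
    fillWithIndex<<<1, n>>>(d_out, n);                      // launch one block of n threads
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    for (int i = 0; i < n; ++i)
        printf("h_out[%d] = %d\n", i, h_out[i]);
    return 0;
}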
15 of 19 people found the following review helpful By S. Yutzy on February 12, 2010
Format: Paperback Verified Purchase
One of the problems with many parallel programming books is that they take too general an approach, leaving the reader to figure out how to implement the ideas in the library of his or her choosing. There's certainly a place for such a book in the world, but not if you want to get up and running quickly.

Programming Massively Parallel Processors presents parallel programming from the perspective of someone programming an NVIDIA GPU using their CUDA platform. From matrix multiplication (the "hello world" of the parallel computing world) to fine-tuned optimization, this book walks the reader step by step through not only how to do it, but how to think about it for an arbitrary problem.

The introduction mentions that this book does not require a background in computer architecture or C/C++ programming experience, and while that's largely true, I think it would be extremely helpful to come to a topic like this with at least some exposure to those areas.

Summary: this book is the best reference I've found for learning parallel programming "the CUDA way". Many of the concepts will carry over to other approaches (OpenMP, MPI, etc.), but this is by and large a CUDA book. Highly recommended.
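
The "fine-tuned optimization" the reviewer refers to typically begins with tiling the matrix multiplication through shared memory. The sketch below is illustrative only; TILE is chosen arbitrarily as 16 and Width is assumed to be a multiple of TILE.

#define TILE 16

// Tiled matrix multiplication: each block computes a TILE x TILE tile of P,
// staging tiles of M and N through shared memory to reuse global-memory loads.
__global__ void matMulTiled(const float *M, const float *N, float *P, int Width) {
    __shared__ float Ms[TILE][TILE];
    __shared__ float Ns[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float sum = 0.0f;

    for (int t = 0; t < Width / TILE; ++t) {
        // Cooperatively load one tile of M and one tile of N into shared memory.
        Ms[threadIdx.y][threadIdx.x] = M[row * Width + t * TILE + threadIdx.x];
        Ns[threadIdx.y][threadIdx.x] = N[(t * TILE + threadIdx.y) * Width + col];
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            sum += Ms[threadIdx.y][k] * Ns[k][threadIdx.x];
        __syncthreads();
    }
    P[row * Width + col] = sum;
}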