Buy new: $90.00
FREE delivery Thursday, January 2
Or fastest delivery Monday, December 23. Order within 17 hrs 23 mins.
Arrives before Christmas
Only 1 left in stock - order soon.
Ships from: Celikbooks
Sold by: Celikbooks
Returns
Returnable until Jan 31, 2025
For the 2024 holiday season, eligible items purchased between November 1 and December 31, 2024 can be returned until January 31, 2025.
Payment
Secure transaction
We work hard to protect your security and privacy. Our payment security system encrypts your information during transmission. We don’t share your credit card details with third-party sellers, and we don’t sell your information to others.
FREE Returns
All pages and the cover are intact, but shrink wrap, dust covers, or boxed set case may be missing. Pages may include limited notes, highlighting, or minor water damage but the text is readable. Item may be missing bundled media.
FREE delivery Thursday, December 26
Or Prime members get FREE delivery Saturday, December 21. Order within 4 hrs 23 mins.
Arrives before Christmas
Only 1 left in stock - order soon.
Access codes and supplements are not guaranteed with used items.

Programming Massively Parallel Processors: A Hands-on Approach 1st Edition

4.2 out of 5 stars, 47 ratings

There is a newer edition of this item:

Programming Massively Parallel Processors discusses the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs.

This book describes computational thinking techniques that will enable students to think about problems in ways that are amenable to high-performance parallel computing. It utilizes CUDA (Compute Unified Device Architecture), NVIDIA's software development tool created specifically for massively parallel environments. Students learn how to achieve both high performance and high reliability using the CUDA programming model as well as OpenCL.

This book is recommended for advanced students, software engineers, programmers, and hardware engineers.

  • Teaches computational thinking and problem-solving techniques that facilitate high-performance parallel computing.
  • Utilizes CUDA (Compute Unified Device Architecture), NVIDIA's software development tool created specifically for massively parallel environments.
  • Shows you how to achieve both high performance and high reliability using the CUDA programming model as well as OpenCL.
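
To make the programming model referred to above concrete, here is a minimal, illustrative CUDA sketch (not code from the book): each GPU thread adds one pair of vector elements, and the host allocates device memory, copies the data over, and launches enough thread blocks to cover the array.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements of the input vectors.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) c[i] = a[i] + b[i];                   // guard the last, partial block
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host data.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device data.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Compiled with nvcc, this needs nothing beyond the CUDA runtime.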

Editorial Reviews

Review

"For those interested in the GPU path to parallel enlightenment, this new book from David Kirk and Wen-mei Hwu is a godsend, as it introduces CUDA (tm), a C-like data parallel language, and Tesla(tm), the architecture of the current generation of NVIDIA GPUs. In addition to explaining the language and the architecture, they define the nature of data parallel problems that run well on the heterogeneous CPU-GPU hardware ... This book is a valuable addition to the recently reinvigorated parallel computing literature." --David Patterson, Director of The Parallel Computing Research Laboratory and the Pardee Professor of Computer Science, U.C. Berkeley. Co-author of Computer Architecture: A Quantitative Approach "Written by two teaching pioneers, this book is the definitive practical reference on programming massively parallel processors--a true technological gold mine. The hands-on learning included is cutting-edge, yet very readable. This is a most rewarding read for students, engineers, and scientists interested in supercharging computational resources to solve today's and tomorrow's hardest problems." --Nicolas Pinto, MIT, NVIDIA Fellow, 2009 "I have always admired Wen-mei Hwu's and David Kirk's ability to turn complex problems into easy-to-comprehend concepts. They have done it again in this book. This joint venture of a passionate teacher and a GPU evangelizer tackles the trade-off between the simple explanation of the concepts and the in-depth analysis of the programming techniques. This is a great book to learn both massive parallel programming and CUDA." --Mateo Valero, Director, Barcelona Supercomputing Center "The use of GPUs is having a big impact in scientific computing. David Kirk and Wen-mei Hwu's new book is an important contribution towards educating our students on the ideas and techniques of programming for massively parallel processors." --Mike Giles, Professor of Scientific Computing, University of Oxford "This book is the most comprehensive and authoritative introduction to GPU computing yet. David Kirk and Wen-mei Hwu are the pioneers in this increasingly important field, and their insights are invaluable and fascinating. This book will be the standard reference for years to come." --Hanspeter Pfister, Harvard University "This is a vital and much-needed text. GPU programming is growing by leaps and bounds. This new book will be very welcomed and highly useful across inter-disciplinary fields." --Shannon Steinfadt, Kent State University "GPUs have hundreds of cores capable of delivering transformative performance increases across a wide range of computational challenges. The rise of these multi-core architectures has raised the need to teach advanced programmers a new and essential skill: how to program massively parallel processors." –-CNNMoney.com "This book is a valuable resource for all students from science and engineering disciplines where parallel programming skills are needed to allow solving compute-intensive problems." --BCS: The British Computer Society’s online journal

Review

Learn parallel GPU programming from the first CUDA textbook, expanded from road-tested course material

Product details

  • Publisher: Morgan Kaufmann; 1st edition (February 5, 2010)
  • Language: English
  • Paperback: 280 pages
  • ISBN-10: 0123814723
  • ISBN-13: 978-0123814722
  • Item Weight: 1.34 pounds
  • Dimensions: 7.5 x 0.75 x 9.25 inches
  • Customer Reviews: 4.2 out of 5 stars, 47 ratings

About the author

David Kirk

Customer reviews

4.2 out of 5 stars
47 global ratings

Customers say

Customers find the book's pacing and information quality good for beginners. They describe it as an excellent guide to learning GPU programming via CUDA, with clear writing and explanations. However, some customers feel the book is overpriced and a waste of money.

AI-generated from the text of customer reviews

11 customers mention "Pacing": 11 positive, 0 negative

Customers find the book easy to read and understand. It provides a good introduction to programming GPUs via CUDA, with clear concepts and examples. It is well written and recommended for newcomers to the platform.

"This book is a much better introduction to programming GPUs via CUDA than CUDA manual, or some presentation floating on the web...." Read more

"This book provides a very good introduction into the topic of massive multiprocessing...." Read more

"...im giving it 3 stars because of its quality,its well written,it gets to the point and will give u wat u want, i took one star out because it gave..." Read more

"Extremely Good book, concepts are clearly written out and a lot of stuff is shown...." Read more

5 customers mention "Information quality": 5 positive, 0 negative

Customers find the book informative and useful for learning. They say the concepts are clearly explained and there are many illustrations.

"...Summary: this book is the best reference I've found for learning parallel programming "the CUDA way"...." Read more

"a very interesting book written in such a manner that can readily be understood about this very complicated topic and leaving the reader with many..." Read more

"Extremely Good book, concepts are clearly written out and a lot of stuff is shown...." Read more

"...The case studies included in the book are very informative once the basics have been learned." Read more

4 customers mention "Value for money": 0 positive, 4 negative

Customers are unhappy with the book's value for money. They find it overpriced and a waste of money. The second edition has better information.

"...than what u can get online, and another star out because it is hugely overpriced." Read more

"...As others have pointed out, this is not a large book and fairly expensive...." Read more

"the book is frankly overhyped and for $69b barely 200 pages book is overpriced...." Read more

"...The ebook I bought was a waste of money. There should be some method to allow a student to upgrade the ebook to the necessary edition...." Read more

Top reviews from the United States

  • Reviewed in the United States on February 12, 2010
    One of the problems with many parallel programming books is that they take too general of an approach, which can leave the reader to figure out how to implement the ideas using the library of his/her choosing. There's certainly a place for such a book in the world, but not if you want to get up and running quickly.

    Programming Massively Parallel Processors presents parallel programming from the perspective of someone programming an NVIDIA GPU using their CUDA platform. From matrix multiplication (the "hello world" of the parallel computing world) to fine-tuned optimization, this book walks the reader through step by step not only how to do it, but how to think about it for any arbitrary problem.

    The introduction mentions that this book does not require a background in computer architecture or C/C++ programming experience, and while that's largely true, I think it would be extremely helpful to come into a topic like this with at least some exposure in those areas.

    Summary: this book is the best reference I've found for learning parallel programming "the CUDA way". Many of the concepts will carry over to other approaches (OpenMP, MPI, etc.), but this is by and large a CUDA book. Highly recommended.
    15 people found this helpful
    Report
  • Reviewed in the United States on March 20, 2010
    This book is a much better introduction to programming GPUs via CUDA than CUDA manual, or some presentation floating on the web. It is a little odd in coverage and language. You can tell it is written by two people with different command of English as well as passion. One co-author seems to be trying very hard to be colorful and looking for idiot-proof analogies but is prone to repetition. The other co-author sounds like a dry marketing droid sometimes. There are some mistakes in the codes in the book, but not too many since they don't dwell too long on code listings. In terms of coverage, I wish they'd cover texture memories, profiling tools, examples beyond simple matrix multiplication, and advice on computational thinking for codes with random access patterns. Chapters 6, 8, 9, and 10 are worth reading several times as they are full of practical tricks to use to trade one performance limiter for another in the quest for higher performance.
    24 people found this helpful
    Report
  • Reviewed in the United States on December 26, 2012
    I had used the "CUDA by Example: An Introduction to General-Purpose GPU Programming" book as a primer to CUDA work. Using the information from that book, my first CUDA implementation achieved some improvement in performance, but was not what I had expected. It was only after reading the information in this book that my GPU implementation became what I had hoped it would be. The information allowed me to achieve approximately a 60x improvement in the algorithm, dropping a 7 second implementation in CPU space to less than 1/8 of a second in GPU space.
    One person found this helpful
    Report
  • Reviewed in the United States on February 24, 2014
    This book provides a very good introduction into the topic of massive multiprocessing. I didn't follow the examples because ultimately I haven't been able to use this technique in my projects, but reading the book gave me the feeling that I understand the topic and would be able to put it to good use if I decided to actually use it.
  • Reviewed in the United States on August 22, 2010
    I bought this book because I was short on time; I needed to learn CUDA quickly and efficiently, and on that the book delivered perfectly. But now, after some experience with CUDA, I can see that this book has nothing that you can't get online for free. If you are an experienced programmer and want to get into CUDA and GPU programming, you don't need this book; you just need some time and web tutorials. If you are a novice programmer in general, then this book is for you, as it has some chapters on scientific thinking and parallelism in general, which are an important basis. The OpenCL chapters helped me a lot, though they have some conflicts with the current releases, so you'll need to look online a bit as well to get things working.
    I'm giving it 3 stars because of its quality: it's well written, it gets to the point, and it will give you what you want. I took one star out because it gave nothing special or nothing more than what you can get online, and another star out because it is hugely overpriced.
    2 people found this helpful
    Report
  • Reviewed in the United States on September 16, 2015
    A very interesting book, written in a manner that makes this very complicated topic readily understandable, leaving the reader with many interesting ideas to think about and many ideas to imagine in order to create his own computer and go beyond.
    One person found this helpful
    Report
  • Reviewed in the United States on February 22, 2010
    I think this book was written with the beginner in mind: if you're new to CUDA and having issues understanding NVIDIA's documentation on the subject, then this is the book to get. The authors took time to clarify and solidify some of the more difficult terms to understand, e.g. memory bandwidth utilization and optimizing strategies. But there are shortcomings in the book, and the two I could think of are typos (this is really an issue 'cos it happens with every other book I've read) and the need for more examples to solidify concepts and illustrate them.

    In a nutshell, a great beginner's book but not a handbook sort of book.
    14 people found this helpful
    Report
  • Reviewed in the United States on February 15, 2014
    Extremely Good book, concepts are clearly written out and a lot of stuff is shown.
    Only drawback is that it is no longer all that up to date today, though it was when I purchased and read it.
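
Several of the reviews above single out the book's running example of matrix multiplication, which starts from a straightforward kernel and is then progressively tuned. For readers who have not seen it, a minimal illustrative sketch of such a naive kernel (not code from the book) assigns one thread to each output element:

    #include <cuda_runtime.h>

    // Naive square matrix multiply: one thread per element of C = A * B.
    // Every A and B operand is fetched from global memory on each use,
    // which is exactly the cost the tuned versions aim to reduce.
    __global__ void matMulNaive(const float *A, const float *B, float *C, int width) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < width && col < width) {
            float sum = 0.0f;
            for (int k = 0; k < width; ++k)
                sum += A[row * width + k] * B[k * width + col];
            C[row * width + col] = sum;
        }
    }

    // Host-side launch: 16x16 thread blocks tiling the output matrix
    // (dA, dB, dC are device pointers to width*width floats).
    void launchMatMulNaive(const float *dA, const float *dB, float *dC, int width) {
        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (width + block.y - 1) / block.y);
        matMulNaive<<<grid, block>>>(dA, dB, dC, width);
    }

On most GPUs a kernel like this is typically limited by global-memory bandwidth rather than arithmetic, which is the starting point for the kinds of optimizations the reviewers describe.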

Top reviews from other countries

  • Dimo Dimov
    5.0 out of 5 stars Five Stars
    Reviewed in the United Kingdom on February 25, 2018
    Bought it for a programmer by trade, so I'm guessing it's good.
  • Suhel Sayyad
    5.0 out of 5 stars Five Stars
    Reviewed in India on July 16, 2016
    Nice
  • Anonimo
    5.0 out of 5 stars Excellent
    Reviewed in Italy on November 11, 2013
    A perfect reference book for anyone who wants to approach parallel programming; complete and exhaustive. A classic that still holds up (for now).
  • Dr. Chrilly Donninger
    5.0 out of 5 stars Solid introduction
    Reviewed in Germany on March 12, 2010
    The book grew out of several lectures and courses the authors taught on CUDA. The authors do not follow the usual copy-and-paste approach to the SDK documentation; instead they give the reader the classic advice, RTFM, and concentrate on the conceptual side. Using a matrix multiplication, they show step by step how to get the maximum performance out of a GPU. The direct translation of the problem into CUDA is very simple, but this naive approach is held back by the latency and bandwidth of the graphics card's global memory. This is a classic problem in practically all massively parallel techniques with shared memory (with distributed memory, communication is the bottleneck instead). The authors show how various tricks reduce global memory accesses and make better use of local memory, and they discuss in detail the speedup this can achieve. The individual steps are very well structured didactically. You get a good feel for the strengths and weaknesses of the GPU.

    I have already built an HPC (high-performance computing) application with FPGAs. The FPGA community hoped to get a piece of this lucrative market. For purely numerical (floating-point) HPC applications, those plans died, in my view, with the arrival of CUDA. You still have to apply a fair amount of brainpower to implement an algorithm effectively in CUDA, but compared with the effort of an FPGA implementation it is nothing. Price-wise, there are also worlds between HPC FPGA cards and graphics cards. And I know of no introduction to HPC computing with FPGAs comparable to this book.
    The matter was decided in kids' bedrooms.

    I have in mind porting a financial Monte Carlo simulation to CUDA. My problem, however, is that the simulation is fast enough even on a Pentium. I will probably have to make the model more complex to be able to play with CUDA with a clear conscience. It has never been so easy to write a massively parallel application. But it is not too easy either.
  • LinuxTest
    5.0 out of 5 stars CUDA Programming
    Reviewed in Germany on June 6, 2018
    A first-class book on programming massively parallel systems with NVIDIA CUDA, with good to very good examples; absolutely recommended.
    But it is not for beginners.
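
Dr. Chrilly Donninger's review above describes how the book moves step by step from the naive matrix multiply to versions that cut global-memory traffic by staging data in on-chip shared memory. As a rough illustration of that idea (again, not code from the book), a tiled kernel might look like this:

    #include <cuda_runtime.h>

    #define TILE 16

    // Tiled square matrix multiply: each block cooperatively loads TILE x TILE
    // sub-blocks of A and B into shared memory, so every global-memory value
    // is read once per tile instead of once per output element.
    __global__ void matMulTiled(const float *A, const float *B, float *C, int width) {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float sum = 0.0f;

        // Walk along the shared dimension one tile at a time.
        for (int t = 0; t < (width + TILE - 1) / TILE; ++t) {
            int aCol = t * TILE + threadIdx.x;
            int bRow = t * TILE + threadIdx.y;
            As[threadIdx.y][threadIdx.x] = (row < width && aCol < width) ? A[row * width + aCol] : 0.0f;
            Bs[threadIdx.y][threadIdx.x] = (bRow < width && col < width) ? B[bRow * width + col] : 0.0f;
            __syncthreads();                      // wait until the whole tile is loaded

            for (int k = 0; k < TILE; ++k)
                sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();                      // wait before the tile is overwritten
        }

        if (row < width && col < width)
            C[row * width + col] = sum;
    }

Launched with TILE x TILE thread blocks over the output matrix, this reuses each loaded value TILE times, the kind of trade between memory traffic and on-chip resources that the reviewer credits the book with teaching.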