

The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications [Paperback]

by Clay Breshears
3.7 out of 5 stars (10 customer reviews)

List Price: $44.99
Price: $41.12 & FREE Shipping
You Save: $3.87 (9%)
In Stock.
Gift-wrap available.


Format           Amazon Price
Kindle Edition   $19.79
Paperback        $41.12

Book Description

May 22, 2009 | ISBN-10: 0596521537 | ISBN-13: 978-0596521530 | 1st edition

If you're looking to take full advantage of multi-core processors with concurrent programming, this practical book provides the knowledge and hands-on experience you need. The Art of Concurrency is one of the few resources to focus on implementing algorithms in the shared-memory model of multi-core processors, rather than just theoretical models or distributed-memory architectures. The book provides detailed explanations and usable samples to help you transform algorithms from serial to parallel code, along with advice and analysis for avoiding mistakes that programmers typically make when first attempting these computations.

Written by an Intel engineer with over two decades of parallel and concurrent programming experience, this book will help you:

  • Understand parallelism and concurrency
  • Explore the differences between programming for shared-memory and distributed-memory architectures
  • Learn guidelines for designing multithreaded applications, including testing and tuning
  • Discover how to make best use of different threading libraries, including Windows threads, POSIX threads, OpenMP, and Intel Threading Building Blocks
  • Explore how to implement concurrent algorithms that involve sorting, searching, graphs, and other practical computations

The Art of Concurrency shows you how to keep algorithms scalable to take advantage of new processors with even more cores. For developing parallel code algorithms for concurrent programming, this book is a must.

Frequently Bought Together

The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications + Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism + The Art of Multiprocessor Programming, Revised Reprint
Price for all three: $126.01


Editorial Reviews

About the Author

Clay Breshears has been with Intel since September 2000. He started as a Senior Parallel Application Engineer at the Intel Parallel Applications Center in Champaign, IL, implementing multithreaded and distributed solutions in customer applications. Clay is currently a Course Architect for the Intel Software College, specializing in multi-core and multithreaded programming and training. Before joining Intel, Clay was a Research Scientist at Rice University helping Department of Defense researchers make best use of the latest High Performance Computing (HPC) platforms and resources.

Clay received his Ph.D. in Computer Science from the University of Tennessee, Knoxville, in 1996, but has been involved with parallel computation and programming for over twenty years; six of those years were spent in academia at Eastern Washington University and The University of Southern Mississippi.

Product Details

  • Paperback: 304 pages
  • Publisher: O'Reilly Media; 1 edition (May 22, 2009)
  • Language: English
  • ISBN-10: 0596521537
  • ISBN-13: 978-0596521530
  • Product Dimensions: 9.1 x 7 x 0.8 inches
  • Shipping Weight: 1.2 pounds (View shipping rates and policies)
  • Average Customer Review: 3.7 out of 5 stars (10 customer reviews)
  • Amazon Best Sellers Rank: #903,767 in Books


Customer Reviews

Most Helpful Customer Reviews
39 of 42 people found the following review helpful
This book is kind of a dull read for (in my opinion) interesting material. The writing style is informal, but self-importantly so (lots of "I did this" and "I've said this before"). Even discounting that, the writer cannot make the subject very interesting, partly because he eschews figures, flowcharts, and itemized steps in favor of walls of text, and partly because the writing itself is disjointed and not very good. Remember, informal != good.

Plus, the code is sloppy. Almost everywhere a main routine has a loop pthread_create'ing new threads, "new" is used to allocate a pointer and free is used to deallocate it. That is the most egregious issue; one can take issue with a lot of others (for example, why is the code C++ if 99% of it is C? Why is the pointer allocated in the main thread but freed in the worker thread?).

The thing I liked about the book is that it covers a lot of relevant topics, including modern ones such as MapReduce. It should reward someone with the patience to overlook its flaws.
24 of 25 people found the following review helpful
If you are a relative beginner, and not dealing with inherited code, then this book provides a happily patronising and sloppily-coded "taster" introduction to writing algorithms in OpenMP or Intel Threading Building Blocks, with a little coverage of Pthreads and a very small amount of Windows threading. Java, Erlang, and CUDA/OpenCL are completely absent. Compiler support was sparse and the C++0x standard not yet ratified at the time of writing, so no examples are given of the way lambda functions make it easier to write and use TBB algorithms.

Thread-local storage is mentioned, but no example code is given, so there is insufficient information to actually use it -- the same is true for many other indexed items. (TLS has four entries in the index, but the useful paragraph on p43 is not in the index at all).

If you already know enough to use the libraries it covers, then the only useful things in this book are the hints, tips, and experiences. Unlike a previous reviewer, I think these should be the golden core of the book (there are plenty of better books on parallel algorithms). But like that reviewer, I was left extremely disappointed by an opportunity lost. With the hints and tips scattered across arbitrary algorithms and scorecards throughout the book, most of their potential benefit is lost.

Debugging tools are summarily dealt with on pages 258 and 259, and profiling tools take up the following three pages. Verification and correctness do not make it to the index, although there is a single entry for "testing for correctness", which refers to Design Step 3 on page 10, which says that you should do this, but simply refers you to the six pages of the tools chapter above.
6 of 6 people found the following review helpful
2.0 out of 5 stars Not a true book January 1, 2012
By Sanks
Sloppy piece of work. Content overview is good but the code is sloppy and writing style is bad. I would not recommend this.
20 of 27 people found the following review helpful
5.0 out of 5 stars Solid book on concurrent programming May 24, 2009
This is a new book on concurrent programming that splits the difference between academic tomes on the subject and cookbook-style code dumps.

The author assumes that readers have some basic knowledge of data structures and algorithms and of the asymptotic efficiency of algorithms (Big-Oh notation), as typically taught in an undergraduate computer science curriculum; familiarity with Introduction to Algorithms should do the trick. He also assumes that the reader is an experienced C programmer (he uses C throughout the book), knows something about OpenMP, Intel Threading Building Blocks, POSIX threads, or Windows Threads, and has a good idea of which of these tools will be used in his or her own situation. The author does not focus on a single programming paradigm here, since, for the most part, the functionality of these libraries overlaps. Instead he presents a variety of threading implementations across the wide spectrum of algorithms featured in the latter portion of the book.

The current product description does not show the table of contents, so I do that next:

Chapter 1, "Want to go faster" anticipates and answers some of the questions you might have about concurrent programming. This chapter explains the differences between parallel and concurrent, and describes the four-step threading methodology. The chapter ends with some background on concurrent programming and some of the differences and similarities between distributed-memory and shared-memory programming and execution models.
1 of 2 people found the following review helpful
This book provides an introduction to concurrency of threads and tasks with shared memory, not SIMD concurrency (such as with a vector unit or GPU) or message passing with distributed memory (such as with a cluster). The first half of the book covers the concepts involved in multithreaded programming and how to parallelize algorithms. The second half covers particular parallel algorithms with implementations in C++.

The first several chapters were confusing because they make no reference to the memory model, which is odd for a book that covers shared-memory concurrency. In particular, the book assumes that when one thread writes a global variable, all other threads will see the new value immediately. In reality, compilers can reorder operations and keep values in registers unless variables are marked volatile or there is an explicit memory barrier.

The book assumes the reader has experience with a task or thread library. It does introduce OpenMP, Intel Threading Building Blocks, and Pthreads, but it could offer more explanation for readers not experienced with these implementations of concurrency. The book does not use C++ lambda expressions, which are useful when using Intel TBB, and it does not cover Java concurrency.

The C++ code listings in the book are quite helpful and can be understood by any C programmer, but they could be better. One nitpick in particular is that the code for Quicksort always picks the leftmost element as the pivot, which will result in a stack overflow when sorting a large, nearly sorted list. The book provides no coding exercises, which I often find helpful when learning a new language or library.
