Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers (2nd Edition) Paperback – March 14, 2004

ISBN-13: 978-0131405639 ISBN-10: 0131405632 Edition: 2nd

Frequently Bought Together

Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers (2nd Edition) + Introduction to Languages and the Theory of Computation

Editorial Reviews

From the Inside Flap

Preface

The purpose of this text is to introduce parallel programming techniques. Parallel programming uses multiple computers, or computers with multiple internal processors, to solve a problem at a greater computational speed than using a single computer. It also offers the opportunity to tackle larger problems; that is, problems with more computational steps or more memory requirements, the latter because multiple computers and multiprocessor systems often have more total memory than a single computer. In this text, we concentrate upon the use of multiple computers that communicate between themselves by sending messages; hence the term message-passing parallel programming. The computers we use can be different types (PC, SUN, SGI, etc.) but must be interconnected by a network, and a software environment must be present for intercomputer message passing. Suitable networked computers are very widely available as the basic computing platform for students so that acquisition of specially designed multiprocessor systems can usually be avoided. Several software tools are available for message-passing parallel programming, including PVM and several implementations of MPI, which are all freely available. Such software can also be used on specially designed multiprocessor systems should these systems be available for use. So far as practicable, we discuss techniques and applications in a system-independent fashion.
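
To give a flavour of the message-passing style, the following is a minimal MPI program in C. It is only an illustrative sketch (assuming a standard MPI installation such as MPICH or Open MPI), not code from the book.

/* Minimal MPI program: every process reports its rank.
   Build and run (assumes mpicc and mpirun are available):
     mpicc hello.c -o hello && mpirun -np 4 ./hello            */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI environment   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's identifier   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes   */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI               */
    return 0;
}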

The text is divided into two parts, Part I and Part II. In Part I, the basic techniques of parallel programming are developed. The chapters of Part I cover all the essential aspects, using simple problems to demonstrate techniques. The techniques themselves, however, can be applied to a wide range of problems. Sample code is usually given first as sequential code and then as realistic parallel pseudocode. Often, the underlying algorithm is already parallel in nature and the sequential version has "unnaturally" serialized it using loops. Of course, some algorithms have to be reformulated for efficient parallel solution, and this reformulation may not be immediately apparent. One chapter in Part I introduces a type of parallel programming not centered around message-passing multicomputers, but around specially designed shared memory multiprocessor systems. This chapter describes the use of Pthreads, the IEEE standard threads API, which is widely available and can be used on a single computer.
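
For comparison with the message-passing examples, a minimal Pthreads program in C looks like the sketch below (our illustration, not code from the text; compile with gcc and the -pthread flag).

/* Minimal Pthreads program: create several threads that each print
   their index, then wait for them all to finish.                    */
#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 4

void *worker(void *arg)
{
    long id = (long)arg;                  /* thread index passed by main */
    printf("Hello from thread %ld\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    long i;

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);

    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);   /* wait for each worker */

    return 0;
}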

The prerequisites for studying Part I are a knowledge of sequential programming, such as in the C language, and of associated data structures. Part I can be studied immediately after basic sequential programming has been mastered. Many assignments here can be attempted without specialized mathematical knowledge. If MPI or PVM is used for the assignments, programs are written in C with message-passing library calls. The descriptions of the specific library calls needed are given in the appendices.
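
As an example of the kind of message-passing library calls involved, the sketch below shows a single MPI point-to-point exchange in C. It illustrates the standard MPI API rather than one of the book's assignments, and assumes the program is run with at least two processes.

/* One point-to-point message: process 1 sends an integer to process 0.
   Assumes at least two processes, e.g. mpirun -np 2 ./a.out           */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, data;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        data = 42;
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);  /* to rank 0, tag 0 */
    } else if (rank == 0) {
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                         /* from rank 1      */
        printf("Process 0 received %d from process 1\n", data);
    }

    MPI_Finalize();
    return 0;
}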

Many parallel computing problems have specially developed algorithms, and in Part II problem-specific algorithms are studied in both non-numeric and numeric domains. For Part II, some mathematical concepts are needed such as matrices. Topics covered in Part II include sorting, matrix multiplication, linear equations, partial differential equations, image processing, and searching and optimization. Image processing is particularly suitable for parallelization and is included as an interesting application with significant potential for projects. The fast Fourier transform is discussed in the context of image processing. This important transform is also used in many other areas, including signal processing and voice recognition.
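
As a taste of the numeric algorithms in Part II, the sketch below parallelizes matrix multiplication by distributing row blocks of one matrix with MPI collectives. It is our own simplified illustration, not an algorithm reproduced from the book, and it assumes the matrix size N is divisible by the number of processes.

/* Row-block parallel matrix multiplication C = A * B (illustrative sketch).
   Assumes N is divisible by the number of MPI processes.                   */
#include <stdio.h>
#include <mpi.h>

#define N 8

int main(int argc, char *argv[])
{
    static double A[N][N], B[N][N], C[N][N];
    double Ablock[N][N], Cblock[N][N];          /* oversized for simplicity */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                        /* rows handled per process */

    if (rank == 0)                              /* root fills A and B       */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = i + j;
                B[i][j] = (i == j);             /* identity, easy to check  */
            }

    /* distribute row blocks of A; broadcast all of B */
    MPI_Scatter(A, rows * N, MPI_DOUBLE, Ablock, rows * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < rows; i++)              /* local computation        */
        for (int j = 0; j < N; j++) {
            Cblock[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                Cblock[i][j] += Ablock[i][k] * B[k][j];
        }

    /* collect the row blocks of C on the root */
    MPI_Gather(Cblock, rows * N, MPI_DOUBLE, C, rows * N, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("C[0][0] = %g (B is the identity, so C should equal A)\n",
               C[0][0]);

    MPI_Finalize();
    return 0;
}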

A large selection of "real-life" problems drawn from practical situations is presented at the end of each chapter. These problems require no specialized mathematical knowledge and are a unique aspect of this text. They develop skills in using parallel programming techniques rather than simply learning to solve specific problems such as sorting numbers or multiplying matrices.

Topics in Part I are suitable as additions to normal sequential programming classes. At the University of North Carolina at Charlotte (UNCC), we introduce our freshman students to parallel programming in this way. In that context, the text is a supplement to a sequential programming course text. The sequential programming language is assumed to be C or C++. Parts I and II together are suitable for a more advanced undergraduate parallel programming/computing course, and at UNCC we use the text in that manner.

Full details of the UNCC environment and site-specific details can be found at
cs.uncc/par_prog.
Included at this site are extensive Web pages to help students learn how to compile and run parallel programs. Sample programs are provided. An Instructor's Manual is also available to instructors. Our work on teaching parallel programming is connected to that done by the Regional Training Center for Parallel Processing at North Carolina State University.

It is a great pleasure to acknowledge Dr. M. Mulder, program director at the National Science Foundation, for supporting our project. Without his support, we would not have been able to pursue the ideas presented in this text. We also wish to thank the graduate students who worked on this project, J. Alley, M. Antonious, M. Buchanan, and G. Robins, and the undergraduate students G. Feygin, W. Hasty, C. Beauregard, M. Moore, D. Lowery, K. Patel, Johns Cherian, and especially Uday Kamath. This team helped develop the material and assignments with us. We should like to record our thanks to James Robinson, the departmental system administrator who established our local workstation cluster, without which we would not have been able to conduct the work.

We should also like to thank the many students at UNCC who helped us refine the material over the last few years, especially the "teleclasses," in which the materials were classroom-tested in a unique setting. These teleclasses are broadcast to several North Carolina universities, including UNC-Asheville, UNC-Greensboro, UNC-Wilmington, and North Carolina State University, in addition to UNCC. We owe a debt of gratitude to many people, among whom Professor Wayne Lang of UNC-Asheville and Professor Mladen Vouk of NC State University deserve special mention. Professor Lang truly contributed to the course development in the classroom, and Professor Vouk, apart from presenting an expert guest lecture for us, set up an impressive Web page that included "real audio" of our lectures and "automatically turning" slides. A parallel programming course based upon the material in this text was also given at the Universidad Nacional de San Luis in Argentina by kind invitation from Professor Raul Gallard. All these activities helped us in developing this text.
We would like to express our appreciation to Alan Apt and Laura Steele of Prentice Hall, who received our proposal for a textbook and supported us throughout its development. Reviewers provided us with very helpful advice.

Finally, may we ask that you please send comments and corrections to us at
abw@uncc (Barry Wilkinson) or cma@uncc (Michael Allen).

Barry Wilkinson

Michael Allen

University of North Carolina

Charlotte

From the Back Cover

This accessible text covers the techniques of parallel programming in a practical manner that enables readers to write and evaluate their parallel programs. Supported by the National Science Foundation and exhaustively class-tested, it is the first text of its kind that does not require access to a special multiprocessor system, concentrating instead on parallel programs that can be executed on networked computers using freely available parallel software tools. The book covers the timely topic of cluster programming, of interest to many programmers due to the recent availability of low-cost computers. It uses MPI pseudocode to describe algorithms, allows different programming tools to be used for implementation, and provides readers with thorough coverage of shared memory programming, including Pthreads and OpenMP. It is also useful as a professional reference for programmers and system administrators.


Product Details

  • Paperback: 496 pages
  • Publisher: Pearson; 2nd edition (March 14, 2004)
  • Language: English
  • ISBN-10: 0131405632
  • ISBN-13: 978-0131405639
  • Product Dimensions: 7 x 1.1 x 9 inches
  • Shipping Weight: 1.6 pounds
  • Average Customer Review: 3.7 out of 5 stars (11 customer reviews)
  • Amazon Best Sellers Rank: #363,423 in Books


Customer Reviews

3.7 out of 5 stars

Most Helpful Customer Reviews

33 of 34 people found the following review helpful By Rajkumar Buyya on April 3, 2000
Format: Paperback
Clusters of computers have become an appealing platform for cost-effective parallel computing, and more particularly so for teaching parallel processing. At the Monash University School of Computer Science and Software Engineering, I am teaching the "CSC433: Parallel Systems" subject for BSc Honours students. The course covers various communication models and languages for parallel programming. Cluster computing is one of the focus topics of this course, and I found two books that suit it well--both published by Prentice Hall in 1999. The first is "High Performance Cluster Computing" by R. Buyya (editor), which I use for teaching cluster computer architecture and systems issues. The second is "Parallel Programming" by B. Wilkinson and M. Allen, which I use for teaching the programming of clusters using message-passing concepts. I found the two books complementary to each other.
The Wilkinson and Allen book discusses key aspects of parallel programming concepts and generic constructs with practical example programs. Each concept has been explained using figures and flow diagrams. The programs are illustrated mostly in C, using generic parallel programming constructs and popular parallel programming interfaces such as threads, PVM, and MPI. The authors have also created an excellent Web resources home page that offers presentation slides, program source code, and an instructor's manual. All these tools make teaching a parallel programming course a pleasing experience. I have no hesitation in recommending this book to anyone serious about teaching parallel programming on clusters.
4 of 5 people found the following review helpful By A Customer on January 15, 2003
Format: Paperback
The book does quite well in explaining the concepts of parallel computing and programming, and I have very few complaints about anything actually written in the book. (A companion CD with some sample MPI/PVM programs would have been nice.) However, as well as this book is written and organized, it is almost comical to have a book of this size (paperback, at that) costing nearly $... If the book had cost about $.. less and had the companion CD, it would have been five stars.
3 of 4 people found the following review helpful By A Customer on February 26, 2004
Format: Paperback
The book serves as a good introduction to several advanced computing techniques. It isn't for beginners in computer science or networking, and it isn't worth the list price. Unfortunately, the topic isn't something you are likely to find in a career, so it isn't useful to general computer science students.
It is great as a learning book, in-depth enough that you could use it for on-the-job learning. It covers the things you need to know for real-world use.
I would have given it 5 stars, except it isn't all that great as a reference; you will probably end up using online help for whatever communications package you use. It's the kind of book you read once or twice, then give away to younger colleagues.
1 of 1 people found the following review helpful By A Customer on February 27, 2003
Format: Paperback
This would be a great reference manual, but I am using this text in my parallel processing course and the pseudocode is confusing and the MPI functions are introduced with poor descriptions.
By Prototype H7K3 on December 21, 2011
Format: Paperback Verified Purchase
Covers everything from the basics of an algorithm to its parallel implementation. And of course there's analysis of cost and efficiency in easy-to-understand language. Covers a wide variety of parallel techniques.
Format: Paperback Verified Purchase
This book helped me pass the class; the professor wasn't really giving too many useful lectures, so I had to teach myself Open MPI and parallel programming paradigms using this book.
