Matrix Computations (Johns Hopkins Studies in Mathematical Sciences) (3rd Edition) Paperback – October 15, 1996

ISBN-13: 978-0801854149 ISBN-10: 0801854148 Edition: 3rd

5 New from $69.00, 28 Used from $13.03

            New from   Used from
Hardcover   $38.99
Paperback   $69.00     $13.03

Product Details

  • Paperback: 728 pages
  • Publisher: Johns Hopkins University Press; 3rd edition (October 15, 1996)
  • Language: English
  • ISBN-10: 0801854148
  • ISBN-13: 978-0801854149
  • Product Dimensions: 6.1 x 1.3 x 9.2 inches
  • Shipping Weight: 3 pounds
  • Average Customer Review: 4.4 out of 5 stars (35 customer reviews)
  • Amazon Best Sellers Rank: #297,738 in Books

Editorial Reviews

Review

Praise for previous editions:

A wealth of material, some old and classical, some new and still subject to debate. It will be a valuable reference source for workers in numerical linear algebra as well as a challenge to students.

(SIAM Review)

In purely academic terms the reader with an interest in matrix computations will find this book to be a mine of insight and information, and a provocation to thought; the annotated bibliographies are helpful to those wishing to explore further. One could not ask for more, and the book should be considered a resounding success.

(Bulletin of the Institute of Mathematics and its Applications)

The authors have rewritten and clarified many of the proofs and derivations from the first edition. They have also added new topics such as Arnoldi iteration, domain decomposition methods, and hyperbolic downdating. Clearly the second edition is an invaluable reference book that should be in every university library. With the new proofs and derivations, it should remain the text of choice for graduate courses in matrix computations.

(Bulletin of the International Linear Algebra Society)

Customer Reviews

4.4 out of 5 stars
  • 5 star: 22
  • 4 star: 6
  • 3 star: 6
  • 2 star: 0
  • 1 star: 1

See all 35 customer reviews
"This book looks like it should be both dry and difficult to read." (Alexander C. Zorach)
"I have found it quite easy to code up various algorithms from the pseudo-code descriptions given in this book." (James Arvo)
"This book is an invaluable reference for anyone working in matrix computations or linear algebra." (kaplan@vibes.ae.utexas.edu)

Most Helpful Customer Reviews

101 of 107 people found the following review helpful By Ali Civril on July 20, 2005
Format: Paperback
This is not a complete review; I just want to point out something important about the book. I'm a second-year computer science PhD student, comfortable with linear algebra. I have been using this book for a couple of months to implement the SVD (singular value decomposition), and unfortunately the book has introduced some difficulties.

First of all, it's annoyingly terse! You must be quite comfortable with matrices and all the usual manipulations to "grasp" the main idea behind an algorithm. I'm talking about truly understanding it, not implementing it line by line. Most of the time, you will need paper and pencil to follow what is going on during the execution of an algorithm.

And there's one more important thing: there are typos and, worse, there are mistakes. A specific example:

Page 456, Algorithm 8.6.2 (The SVD Algorithm):

It doesn't explain how to extract U and V in the decomposition A = U^T*D*V, and the last line is incorrect.

diag(I_p, U, I_{q+m-n}) is not an n*n matrix, so you cannot multiply B by this matrix from the left. Maybe there's something I didn't catch, but this is again a deficiency of the book.
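
One practical way to sanity-check an SVD implementation, whatever convention is used for U, D, and V, is to compare it against a library routine. A minimal NumPy sketch, assuming NumPy's A = U*diag(s)*V^T convention rather than the U^T*D*V form quoted above:

    import numpy as np

    # Sanity check for a hand-rolled SVD: compare against NumPy's reference routine.
    # NumPy's convention is A = U @ diag(s) @ Vt, with orthonormal U and Vt.
    A = np.random.randn(6, 4)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    print(np.allclose(U @ np.diag(s) @ Vt, A))   # reconstruction of A holds
    print(np.allclose(U.T @ U, np.eye(4)))       # columns of U are orthonormal
    print(np.allclose(Vt @ Vt.T, np.eye(4)))     # rows of Vt are orthonormal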

Page 252, Example 5.4.2, on Householder bidiagonalization:

The given matrices do not constitute a correct bidiagonalization; I checked them with Matlab.
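
Checking a claimed bidiagonalization is easy to automate: build orthogonal U and V from Householder reflections and verify that U^T A V comes out upper bidiagonal. The sketch below is a plain NumPy rendering of that Golub-Kahan-style reduction, written for illustration only and not taken from the book's Example 5.4.2:

    import numpy as np

    def householder_bidiagonalize(A):
        # Reduce A (m x n, m >= n) to upper bidiagonal B with B = U.T @ A @ V,
        # using left and right Householder reflections.
        B = np.array(A, dtype=float)
        m, n = B.shape
        U, V = np.eye(m), np.eye(n)
        for k in range(n):
            # Left reflection: zero out B[k+1:, k].
            x = B[k:, k].copy()
            x[0] += np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
            if np.linalg.norm(x) > 0:
                v = x / np.linalg.norm(x)
                B[k:, :] -= 2.0 * np.outer(v, v @ B[k:, :])
                U[:, k:] -= 2.0 * np.outer(U[:, k:] @ v, v)
            if k < n - 2:
                # Right reflection: zero out B[k, k+2:].
                x = B[k, k+1:].copy()
                x[0] += np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
                if np.linalg.norm(x) > 0:
                    v = x / np.linalg.norm(x)
                    B[:, k+1:] -= 2.0 * np.outer(B[:, k+1:] @ v, v)
                    V[:, k+1:] -= 2.0 * np.outer(V[:, k+1:] @ v, v)
        return U, B, V

    A = np.random.randn(5, 4)
    U, B, V = householder_bidiagonalize(A)
    print(np.allclose(U.T @ A @ V, B))                 # U^T A V reproduces B
    print(np.allclose(B, np.triu(B) - np.triu(B, 2)))  # B is upper bidiagonal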

And a typo: on page 216, in 5.1.9 (applying Givens rotations), the 4th and 5th lines of the algorithm are written incorrectly.

A(1, j) = ...

A(2, j) = ...

should be

A(i, j) = ...

A(k, j) = ...
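
For what it's worth, the corrected update is easy to state in code. A minimal NumPy sketch of applying a Givens rotation to rows i and k (sign conventions for c and s differ between texts, so treat the signs here as illustrative):

    import numpy as np

    def apply_givens_to_rows(A, i, k, c, s):
        # Overwrite rows i and k of A with their rotated combination:
        # the corrected A(i, j) / A(k, j) updates, not A(1, j) / A(2, j).
        for j in range(A.shape[1]):
            t1, t2 = A[i, j], A[k, j]
            A[i, j] = c * t1 - s * t2
            A[k, j] = s * t1 + c * t2

    A = np.random.randn(4, 4)
    i, k, theta = 1, 3, 0.3
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(4)
    G[i, i], G[i, k], G[k, i], G[k, k] = c, s, -s, c
    B = G.T @ A                    # same rotation applied as a full matrix product
    apply_givens_to_rows(A, i, k, c, s)
    print(np.allclose(A, B))       # the row-wise updates agree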

So, these are the ones I encountered. This book is unmatched in its category in terms of depth and coverage, but it definitely needs a new edition with a more careful treatment.
31 of 33 people found the following review helpful By James Arvo on August 1, 2003
Format: Paperback
This is one of the definitive texts on computational linear algebra, or more specifically, on matrix computations. The term "matrix computations" is actually the more apt name because the book focuses on computational issues involving matrices, the currency of linear algebra, rather than on linear algebra in the abstract. As an example of this distinction, the authors develop both "saxpy" (scalar "a" times vector "x" plus vector "y") based algorithms and "gaxpy" (generalized saxpy, where "a" is a matrix) based algorithms, which are organized to exploit very efficient low-level matrix computations. This is an important organizing concept that can lead to more efficient matrix algorithms.
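
As a rough illustration of that distinction (the function names here are just illustrative, not the book's notation), a saxpy adds a scalar multiple of one vector to another, while a gaxpy builds a matrix-vector update out of column-wise saxpys:

    import numpy as np

    def saxpy(a, x, y):
        # y <- a*x + y for a scalar a and vectors x, y.
        return a * x + y

    def gaxpy(A, x, y):
        # y <- A*x + y, organized column by column:
        # each column A[:, j] contributes one saxpy with scalar x[j].
        y = y.copy()
        for j in range(A.shape[1]):
            y = saxpy(x[j], A[:, j], y)
        return y

    A = np.random.randn(4, 3)
    x = np.random.randn(3)
    y = np.random.randn(4)
    print(np.allclose(gaxpy(A, x, y), A @ x + y))   # True
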
For each important algorithm discussed, the authors provide a concise and rigorous mathematical development followed by crystal clear pseudo-code. The pseudo-code has a Pascal-like syntax, but with embedded Matlab abbreviations that make common low-level matrix operations extremely easy to express. The authors also use indentation rather than tedious BEGIN-END notation, another convention that makes the pseudo-code crisp and easy to understand. I have found it quite easy to code up various algorithms from the pseudo-code descriptions given in this book. The authors cover most of the traditional topics such as Gaussian elimination, matrix factorizations (LU, QR, and SVD), eigenvalue problems (symmetric and unsymmetric), iterative methods, the Lanczos method, orthogonalization and least squares (both constrained and unconstrained), as well as basic linear algebra and error analysis.
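
To give a feel for how directly such pseudo-code translates, here is a small hypothetical example in NumPy: row-oriented forward substitution for a lower-triangular system, the kind of algorithm the book states in a few indented Matlab-flavored lines (my own sketch, not a transcription from the book):

    import numpy as np

    def forward_substitution(L, b):
        # Solve L x = b for lower-triangular L, one row at a time.
        n = L.shape[0]
        x = np.zeros(n)
        for i in range(n):
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    L = np.tril(np.random.randn(4, 4)) + 4.0 * np.eye(4)   # well-conditioned test matrix
    b = np.random.randn(4)
    print(np.allclose(L @ forward_substitution(L, b), b))  # True
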
I've used this book extensively during the past ten years. It's an invaluable resource for teaching numerical analysis (which invariably includes matrix computations), and for virtually any research that involves computational linear algebra. If you've got matrices, chances are you will appreciate having this book around.
19 of 20 people found the following review helpful By Paul Markst on October 2, 2006
Format: Paperback
In certain ways, this book has been both a bane and a boon to my career as a computational mathematician. Way back in 1989, I had the mixed experience of taking a course in Numerical Analysis from Brian Smith at the University of New Mexico. Prof. Smith taught that course exclusively from this book (actually, from the 2nd edition). As a college sophomore, I was terribly out of my depth, but I managed to do okay. Later, I had the opportunity to study under Gene Golub at Stanford, although I was certainly not one of his better students :) Naturally, Prof. Golub also taught pretty much exclusively from this book; by the way, he is a gifted mathematician, a wonderful instructor, and a real gentleman. Between these experiences, I'd say I became extremely familiar with the contents of this book.

Okay, back to the actual book. If you've got a numerical linear algebra problem to solve, and you don't know which NAG or Matlab routine to use, or similarly can't figure out why your code ripped off from Numerical Recipes is blowing up on a certain matrix, you'll find the reason in this book. The main issue is that you've got to know what you're looking for in order to find it, and that's kind of the kernel of the problem. Some reviewers have stated that the writing is terse, that it is too rigorous, etc. I don't really agree with these reviews, but I do agree that it is not for the casual reader who wants a quick answer to the question of "how do I invert this thing". The book spends a lot of time on subtle details such as convergence and stability, and in my experience, these excellent treatments are wasted on most would-be users who are really just looking for a numerical silver bullet, which, of course, just doesn't exist.
