Matrix Analysis (Graduate Texts in Mathematics 169), 1997th Edition
by Rajendra Bhatia (Author)

ISBN-13: 978-0387948461
ISBN-10: 0387948465
Customers who viewed this item also viewed
 Matrix Analysis, Roger A. Horn (Paperback)
 Matrix Computations (Johns Hopkins Studies in the Mathematical Sciences) (Hardcover)
 Functions of Matrices: Theory and Computation (Other Titles in Applied Mathematics) by Nicholas J. Higham (2008-03-26) (Hardcover)
 Introduction to Matrix Analysis and Applications (Universitext) (Paperback)
 An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics) (Hardcover)
 Matrix Analysis and Applied Linear Algebra (Textbook Binding)
Customers who bought this item also bought
 High-Dimensional Probability: An Introduction with Applications in Data Science (Cambridge Series in Statistical and Probabilistic Mathematics, Series Number 47) (Hardcover)
 The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics), Trevor Hastie (Hardcover)
 Reinforcement Learning and Optimal Control, Dimitri Bertsekas (Hardcover)
 High-Dimensional Statistics (A Non-Asymptotic Viewpoint) (Hardcover)
 Convex Optimization, Stephen Boyd (Hardcover)
 Lectures on Convex Optimization (Springer Optimization and Its Applications, 137) (Hardcover)
Editorial Reviews
Review
R. Bhatia
Matrix Analysis
"A highly readable and attractive account of the subject. The book is a must for anyone working in matrix analysis; it can be recommended to graduate students as well as to specialists."―ZENTRALBLATT MATH
"There is an ample selection of exercises carefully positioned throughout the text. In addition each chapter includes problems of varying difficulty in which themes from the main text are extended."―MATHEMATICAL REVIEWS
From the Back Cover
The aim of this book is to present a substantial part of matrix analysis that is functional analytic in spirit. Much of this will be of interest to graduate students and research workers in operator theory, operator algebras, mathematical physics, and numerical analysis. The book can be used as a basic text for graduate courses on advanced linear algebra and matrix analysis. It can also be used as supplementary text for courses in operator theory and numerical analysis. Among topics covered are the theory of majorization, variational principles of eigenvalues, operator monotone and convex functions, perturbation of matrix functions, and matrix inequalities. Much of this is presented for the first time in a unified way in a textbook. The reader will learn several powerful methods and techniques of wide applicability, and see connections with other areas of mathematics. A large selection of matrix inequalities will make this book a valuable reference for students and researchers who are working in numerical analysis, mathematical physics and operator theory.
Product details
 Publisher : Springer; 1997th edition (November 15, 1996)
 Language : English
 Hardcover : 360 pages
 ISBN-10 : 0387948465
 ISBN-13 : 978-0387948461
 Item Weight : 1.64 pounds
 Dimensions : 6.14 x 0.81 x 9.21 inches

Best Sellers Rank:
#508,197 in Books
 #23 in Mathematical Matrices
 #61 in Number Systems (Books)
 #144 in Linear Algebra (Books)
Customer reviews
4.8 out of 5 stars, 6 global ratings
Top reviews from the United States
Reviewed in the United States on September 18, 2007
Verified Purchase
This book is fascinating! Bhatia has made an excellent selection of topics. The book is frequently cited in the quantum information literature, and I assume in the literatures of other research subjects as well. It is a concise treatment of matrix analysis with the flavor of finite-dimensional functional analysis.
I have a few suggested tweaks for future(?) editions or classroom discussions:
Remarks on chapter 2:
The presentation at the beginning of chapter 2 would be better motivated if one operationally defined x to majorize y iff y = Ax for some doubly stochastic matrix A. Bhatia uses an algebraic definition and then proves the equivalence six pages later. Immediately giving an unmotivated algebraic condition robs the reader of the chance to discover or prove the condition for himself.
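To make the operational definition concrete (my own numerical sketch, not from the book): build a doubly stochastic A as a convex combination of permutation matrices, as Birkhoff's theorem allows, and check that y = Ax satisfies the algebraic partial-sum condition.

```python
import numpy as np

def majorizes(x, y, tol=1e-9):
    """Algebraic condition: x majorizes y iff the partial sums of the
    decreasingly sorted x dominate those of y, with equal totals."""
    xs = np.sort(x)[::-1]
    ys = np.sort(y)[::-1]
    partial_ok = np.all(np.cumsum(xs) >= np.cumsum(ys) - tol)
    return partial_ok and abs(xs.sum() - ys.sum()) < tol

rng = np.random.default_rng(0)
x = rng.normal(size=5)

# A random doubly stochastic A as a convex combination of permutation
# matrices (Birkhoff: every doubly stochastic matrix is such a combination).
perms = [np.eye(5)[rng.permutation(5)] for _ in range(4)]
w = rng.dirichlet(np.ones(4))
A = sum(wi * P for wi, P in zip(w, perms))

y = A @ x
print(majorizes(x, y))  # True: y = Ax is majorized by x
```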
There is a very confusing typo in the proof of Theorem II.2.8. The statement
"Let r be the smallest of the positive coordinates of x"
should read
"Let r be the smallest of the positive coordinates of y".
Another small remark: Just after the statement of Corollary II.3.4, Bhatia states that "one part of Theorem II.3.1 and Exercise II.3.2 is subsumed by [Corollary II.3.4]." In fact, they are equivalent! That II.3.1 and II.3.2 imply II.3.4 follows immediately from the following
Observation: If f: R -> R and g: R -> R are convex and f is monotonically increasing, then f composed with g is convex.
Notes on chapter 4:
It would be nice to have the isomorphism between balls and norms presented, perhaps just as an exercise. Then the reader can get a visual mental picture of the various conditions for a norm to be a symmetric gauge function. It might also be nice to move theorem IV.2.1 to the very beginning of that chapter, so that the reader sees the point of section IV.1 immediately.
A small remark is that the proof of Theorem IV.1.8 is made slightly more transparent by the observation that, by Theorem IV.1.6, one has
[Phi(x^p)]^(1/p) = sup Phi(xz),
where the supremum is over z such that [Phi(z^q)]^(1/q) = 1. (The sup is attained when x^p = z^q.) Then Theorem IV.1.8 follows immediately from the triangle inequality and subadditivity of suprema:
[Phi((x+y)^p)]^(1/p) = sup Phi((x+y)z) <= sup [Phi(xz) + Phi(yz)] <= sup Phi(xz) + sup Phi(yz).
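For the special case Phi = the l1 sum (so that [Phi(x^p)]^(1/p) is the ordinary p-norm), the duality above is just Hölder's inequality. A quick numerical sanity check of my remark, with entrywise nonnegative vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1)          # conjugate exponent: 1/p + 1/q = 1
x = rng.random(6)        # nonnegative vector

# Phi = l1 sum, so [Phi(x^p)]^(1/p) is the usual p-norm of x.
lhs = (np.sum(x**p))**(1/p)

# The supremum of Phi(xz) over z >= 0 with [Phi(z^q)]^(1/q) = 1 is
# attained (per the remark above) when z^q is proportional to x^p.
z = x**(p/q)
z /= (np.sum(z**q))**(1/q)   # normalize so [Phi(z^q)]^(1/q) = 1
sup_val = np.sum(x * z)

print(np.isclose(lhs, sup_val))  # True
```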
Chapter 5:
Chapter 5 covers some of the most interesting and surprising mathematics I have ever seen.
Remarks:
1. All the regularity needed to classify the matrix monotone functions is already present in the case of 2 x 2 matrix monotone functions. Perhaps concretely classifying them would modularize the parts of a complicated proof, allowing some separation between the discussion of operator convexity and monotonicity. (Let f: R -> R be nonconstant. Then f is 2 x 2 matrix monotone iff f is differentiable with df/dt > 0 everywhere and (df/dt)^(1/2) concave. Furthermore, the first two estimates of Lemma V.4.1 continue to hold for 2 x 2 matrix monotone functions.)
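As a quick numerical sanity check of matrix monotonicity itself (my example, not the book's): f(t) = sqrt(t) satisfies the criterion stated in parentheses, and random PSD pairs A <= B do satisfy sqrt(A) <= sqrt(B) in the Loewner order.

```python
import numpy as np

def sqrtm_psd(A):
    """Matrix square root of a PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def is_psd(M, tol=1e-9):
    """True if the symmetric part of M has no eigenvalue below -tol."""
    return np.min(np.linalg.eigvalsh((M + M.T) / 2)) >= -tol

rng = np.random.default_rng(2)
ok = True
for _ in range(200):
    X = rng.normal(size=(2, 2))
    Y = rng.normal(size=(2, 2))
    A = X @ X.T                  # random PSD matrix
    B = A + Y @ Y.T              # B >= A in the Loewner order
    ok &= is_psd(sqrtm_psd(B) - sqrtm_psd(A))
print(ok)  # True: sqrt is (2 x 2) matrix monotone
```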
2. Theorem V.3.3 has somewhat restrictive assumptions: Let f: R -> R be extended to a map on self-adjoint matrices using the functional calculus. Then all that is needed to differentiate f(A+tH) at t=0, where A and H are self-adjoint and t is a real parameter, is for f to be differentiable on the spectrum of A. (f could be discontinuous everywhere except on spec(A), for example.)
3. I would have liked the definition of the "second divided difference" of f at the points {a,b,c} to be "the highest-degree coefficient of the at-most-quadratic polynomial P that interpolates f on the set {a,b,c}. When a=b, one chooses P such that P'(a)=f'(a) as well. When a=b=c, one also takes P''(a)=f''(a)." This is the point of Exercise V.3.7, but it makes for easier reading for the definition to be conceptual and to let the exercise be to work out the algebraic consequences.
Furthermore, if desired one can actually avoid this calculation and proceed to the proof of Theorem V.3.10. (Just replace f by interpolating polynomials and evaluate everything by algebra. It has the flavor of Feynman diagrams.)
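The conceptual definition is also easy to check numerically against the classical algebraic formula (my own sketch): fit the interpolating quadratic through three distinct points and read off its leading coefficient.

```python
import numpy as np

def second_divided_difference(f, a, b, c):
    """Leading coefficient of the quadratic interpolating f at distinct a, b, c."""
    coeffs = np.polyfit([a, b, c], [f(a), f(b), f(c)], 2)
    return coeffs[0]   # highest-degree coefficient

def algebraic_formula(f, a, b, c):
    """Classical recursive divided-difference formula f[a,b,c] for distinct points."""
    return ((f(c) - f(b)) / (c - b) - (f(b) - f(a)) / (b - a)) / (c - a)

f = np.exp
a, b, c = 0.3, 1.1, 2.0
print(np.isclose(second_divided_difference(f, a, b, c),
                 algebraic_formula(f, a, b, c)))  # True
```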
4. In Hansen and Pedersen, "Jensen's operator inequality," Bulletin of the London Mathematical Society, 35, pp. 553-564 (2003); arXiv:math.OA/0204049 (2002), the original authors of the noncommutative Jensen inequality state:
"With hindsight we must admit that we unfortunately proved and used [a different formulation of the noncommutative Jensen's inequality]. However, this necessitated the further conditions that 0 is an element of I and that f(0) < 0, conditions that have haunted the theory since then."
Bhatia's presentation is somewhat out of date because it does not include the sharper Jensen's inequality from the more recent work cited above. (Note that the more recent paper appeared after the current 1996 edition of Bhatia was published.)
Furthermore, in the same paper, Hansen and Pedersen also introduce a nice version of Jensen's trace inequality. It is the same as their sharper form of Jensen's operator inequality, except that both sides have a trace in front and the operator convex function f is replaced by an arbitrary (scalar) convex function f: R -> R. (f acts on matrices using the functional calculus.) In particular, the trace inequality is much simpler to prove and more widely applicable, although less powerful.
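Trace statements of this kind are easy to test numerically. For instance, for a scalar convex f applied through the functional calculus, the map A -> Tr f(A) is convex on Hermitian matrices even when f is not operator convex; here is a random check with f(t) = t^4 (my sketch, not from the paper):

```python
import numpy as np

def trace_f(A, f):
    """Tr f(A) for Hermitian A, with f applied through the functional calculus."""
    return np.sum(f(np.linalg.eigvalsh(A)))

rng = np.random.default_rng(3)
f = lambda t: t**4          # convex on R, but NOT operator convex
ok = True
for _ in range(200):
    A = rng.normal(size=(3, 3)); A = (A + A.T) / 2   # random Hermitian
    B = rng.normal(size=(3, 3)); B = (B + B.T) / 2
    lam = rng.random()
    lhs = trace_f(lam * A + (1 - lam) * B, f)
    rhs = lam * trace_f(A, f) + (1 - lam) * trace_f(B, f)
    ok &= (lhs <= rhs + 1e-9)
print(ok)  # True: Tr f is convex although f is not operator convex
```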
5. It would be nice in future editions(?) to include a reference to Petz and Nielsen's nice little proof of strong subadditivity of the von Neumann entropy.
Chapter 7:
I would have liked to see section 7.1 replaced with the following theorem statement (very similar to what's already in 7.1), and to see it proved without choosing an arbitrary basis. (Using an arbitrary basis makes Bhatia's proof of the CS theorem a bit messy, but a reformulation avoids that.)
Definition: A unitary map U on a Hilbert space is a planar rotation iff U restricts to the identity on a subspace of codimension 2 and, on the two-dimensional orthogonal complement P, U acts in some orthonormal basis as the rotation
cos(t)  -sin(t)
sin(t)   cos(t).
Theorem: Let E and F be distinct subspaces of the Hilbert space H, with dim E = dim F. Then there exists a set of planar rotations {R_i} with the properties that
1. The two-dimensional rotation subspaces of the R_i are mutually orthogonal and intersect E and F. (In particular, the R_i commute.)
2. Each R_i rotates by an angle theta_i in (0, pi/2].
3. E is rotated onto F by the product of the R_i.
Furthermore, the collection of angles theta_i is uniquely determined by E and F, including multiplicity. If the angles theta_i are distinct and strictly less than pi/2, then the corresponding R_i are also uniquely determined.
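The angles theta_i here are exactly the principal (canonical) angles between E and F, computable from the singular values of the product of orthonormal bases; here is a small numpy sketch of mine (scipy also ships this as `scipy.linalg.subspace_angles`):

```python
import numpy as np

def principal_angles(E, F):
    """Principal angles between the column spans of E and F.
    Columns need not be orthonormal; we orthonormalize via QR first."""
    QE, _ = np.linalg.qr(E)
    QF, _ = np.linalg.qr(F)
    s = np.linalg.svd(QE.T @ QF, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two 2-dimensional subspaces of R^4 built to meet at known angles t1, t2.
t1, t2 = 0.3, 1.2
E = np.eye(4)[:, :2]                         # span{e1, e2}
F = np.column_stack([
    np.cos(t1) * np.eye(4)[:, 0] + np.sin(t1) * np.eye(4)[:, 2],
    np.cos(t2) * np.eye(4)[:, 1] + np.sin(t2) * np.eye(4)[:, 3],
])
print(np.allclose(np.sort(principal_angles(E, F)), [t1, t2]))  # True
```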
Further remark on chapter VII: There is an error on page 223. The author states "we have a bijection psi from H tensor H onto L(H), that is linear in the first variable and conjugate linear in the second variable."
This is impossible, since (lambda v) tensor w = v tensor (lambda w). In particular, any map that is linear in the first variable is necessarily linear in the second variable. The practice of introducing a map from H tensor H to L(H) is a cause of much ugly basis-invariance breaking in quantum information theory and consequently should be discouraged.
Reviewed in the United States on December 26, 2014
Verified Purchase
Excellent. The deepest book in matrix analysis I have seen.
Reviewed in the United States on March 31, 2000
This book is an expansion of the author's lecture notes "Perturbation Bounds for Matrix Eigenvalues" published in 1987. I have used both versions for my students' projects. The book under review centers around the themes on matrix inequalities and perturbation of eigenvalues and eigenspaces. The first half of the book covers the "classical" material of majorisation and matrix inequalities in a very clear and readable manner. The second half is a survey of the modern treatment of perturbation of matrix eigenvalues and eigenspaces. It includes lots of recent research results by the author and others within the last ten years. This book has a large collection of challenging exercises. It is an excellent text for a senior undergraduate or graduate course on matrix analysis.
Reviewed in the United States on October 11, 2007
Nice book. Many useful facts combined in one volume. A real pleasure to read.
The only drawback is the sketchy last chapter (almost no proofs, due to lack of space, I believe).