Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions (Springer Series in Statistics) 3rd Edition
Top Customer Reviews
The orientation, however, is toward the Bayesian approach, with good coverage of prior and posterior distributions, conjugate priors, and Bayesian hierarchical models. The last chapter, on Markov chain Monte Carlo methods, is devoted mostly to Bayesian inference.
This is a great reference source but can also be used in a graduate-level course on mathematical statistics, probably as a supplemental text. There are many useful exercises in this edition. The book is fairly advanced and presupposes an introduction to mathematical statistics at the level of the text by Bickel and Doksum. It also assumes that the reader has had some introduction to Bayesian methods, but only at the level of, say, Box and Tiao's text. It does not assume any knowledge of stochastic processes, including Markov chains.
Convergence properties of the Markov chain Monte Carlo (MCMC) algorithms are crucial to their success. Elements of discrete Markov chains are introduced in chapter 6 to make the algorithms understandable, but proofs of convergence are avoided because they would require a more detailed account of Markov chain theory.
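To make the idea concrete, here is a minimal random-walk Metropolis sampler (my own sketch, not code from the book): it constructs a Markov chain whose stationary distribution is the target, which is exactly the convergence property the chapter discusses. The target here is an assumed standard normal, chosen only for illustration.

```python
import numpy as np

def metropolis(log_target, x0, n_samples=5000, step=1.0, seed=0):
    """Random-walk Metropolis: a Markov chain whose stationary
    distribution is the target density (up to normalization)."""
    rng = np.random.default_rng(seed)
    x = x0
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()  # symmetric proposal
        # Accept with probability min(1, p(prop) / p(x))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[i] = x
    return chain

# Standard normal target; discard an initial burn-in before summarizing
chain = metropolis(lambda t: -0.5 * t * t, x0=0.0)
```

Early iterations depend on the starting point, which is why convergence (how quickly the chain forgets `x0` and mixes over the target) matters in practice.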
Tanner provides a good list of the references that were available in 1996. Research on MCMC methods continues to be intense, so many good references have appeared since the publication of this book. Robert and Casella (1999) provide a more detailed and more current treatment, but even that book is now a couple of years dated.
The EM and data augmentation algorithms are used for problems that can be classified as missing data problems. The data may be missing as in a survey where particular questions are not answered by the respondents, or it could be censored data as in a medical study or clinical trial. The censored data problem is illustrated by Tanner using the Stanford Heart Transplant data. Mixture models are also handled via these algorithms, since the identification of the component that an observation belongs to can be viewed as missing data.
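The mixture-model case can be sketched as follows (my own illustration, not an example from the book): EM for a two-component Gaussian mixture, where the unobserved component labels play the role of the missing data. The E-step computes each point's posterior probability of membership; the M-step re-estimates the parameters by weighted maximum likelihood.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """EM for a 1-D two-component Gaussian mixture; the latent
    component labels are the 'missing data'."""
    # Crude initialization from the sample range
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = np.stack([
            w[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0)
        # M-step: weighted maximum-likelihood updates
        n_k = r.sum(axis=1)
        mu = (r * x).sum(axis=1) / n_k
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
        w = n_k / len(x)
    return w, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
w, mu, sigma = em_two_gaussians(x)
```

Data augmentation proceeds in the same spirit, but samples the missing labels and parameters rather than maximizing over them.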
Tanner demonstrates a wide variety of techniques to handle many important problems and he illustrates them on real data. It is nice to have all of this compactly written in just 200 pages!
 It describes the expectation-maximization (EM) algorithm, data augmentation, and related methods precisely.
 It is not so thick as to make reading a boring experience.
 A graduate-level textbook or reference book.
As a substitute, I recommend Gelman et al.'s Bayesian Data Analysis, which treats the same topics but is much clearer and better explained, as well as more modern.
I got this book because it was the textbook for a class, but I ended up using other books, and this one just gathers dust.