Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions (Springer Series in Statistics) 3rd Edition
- Publisher : Springer; 3rd edition (June 26, 1996)
- Language : English
- Hardcover : 216 pages
- ISBN-10 : 0387946888
- ISBN-13 : 978-0387946887
- Item Weight : 2.38 pounds
- Dimensions : 6.14 x 0.56 x 9.21 inches
Top reviews from the United States
The orientation is toward the Bayesian approach, however, with good coverage of prior and posterior distributions, conjugate priors, and Bayesian hierarchical models. The Markov Chain Monte Carlo methods covered in the last chapter are mostly used for Bayesian inference.
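The prior-to-posterior updating the review mentions is simplest in the conjugate case. As a minimal illustration (not taken from the book), here is the Beta-Binomial conjugate update, where a Beta(a, b) prior on a success probability combines with k successes in n Bernoulli trials to give a Beta(a + k, b + n - k) posterior:

```python
# Hypothetical illustration of a conjugate prior/posterior update.
# Prior: theta ~ Beta(a, b); data: k successes in n Bernoulli trials.
# Posterior: theta | data ~ Beta(a + k, b + n - k).

def beta_binomial_update(a, b, k, n):
    """Return the posterior Beta parameters after k successes in n trials."""
    return a + k, b + (n - k)

# Example: a uniform Beta(1, 1) prior and 7 successes in 10 trials.
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (2 + 10) = 2/3
print(a_post, b_post, round(posterior_mean, 3))  # → 8 4 0.667
```

Conjugacy makes the posterior available in closed form; the MCMC machinery of the later chapters is what handles models where no such closed form exists.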
This is a great reference source but can also be used in a graduate level course on mathematical statistics, probably as a supplemental text. There are many useful exercises in this edition. The book is fairly advanced and presupposes an introduction to mathematical statistics at the level of the text by Bickel and Doksum. It also assumes that the reader has had some introduction to Bayesian methods but only at the level of, say, Box and Tiao's text. It does not assume any knowledge of stochastic processes including Markov chains.
Convergence properties of the Markov Chain Monte Carlo (MCMC) algorithms are crucial to their success. Elements of discrete Markov chains are introduced in chapter 6 to make the algorithms understandable, but proofs of convergence are avoided because they would involve a more detailed account of Markov chain theory.
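The convergence idea for discrete chains can be shown in a few lines. This sketch (mine, not the book's) iterates a two-state transition matrix P and watches the distribution settle to the stationary distribution pi satisfying pi = pi P, which is the property MCMC samplers rely on:

```python
# Minimal sketch of discrete Markov chain convergence (illustrative only).

def step(dist, P):
    """One step of the chain: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# A two-state chain; each row of P sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]
dist = [1.0, 0.0]          # start with all mass in state 0
for _ in range(50):        # iterate the chain
    dist = step(dist, P)

# The stationary distribution solves pi = pi P; here pi = (5/6, 1/6),
# and the iterated distribution converges to it regardless of the start.
print([round(p, 4) for p in dist])  # → [0.8333, 0.1667]
```

In MCMC one runs the chain on the parameter space and designs P (e.g. via Metropolis-Hastings or the Gibbs sampler) so that the stationary distribution is the posterior of interest.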
Tanner provides a good list of the references that were available in 1996. Research in MCMC methods continues to be intense, so many good references have appeared since the publication of this book. Robert and Casella (1999) provide a more detailed and more current treatment, though even that book is now a few years old.
The EM and data augmentation algorithms are used for problems that are classified as missing data problems. The data may be missing as in a survey where particular questions are not answered by the respondents, or it could be censored data as in a medical study or clinical trial. Tanner illustrates the censored data problem using the Stanford Heart Transplant data. Mixture models are also handled via these algorithms, since the identity of the component that an observation belongs to can be viewed as missing data.
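The mixture-model case can be sketched concretely. In this toy example (my illustration, with variances fixed at 1 for simplicity, not Tanner's code), the unknown component label of each point is the "missing data": the E-step computes each point's posterior probability of belonging to component 1, and the M-step re-estimates the mixing weight and the two means from those probabilities:

```python
# Hedged sketch of EM for a two-component Gaussian mixture (known unit
# variances), treating the component labels as missing data.
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_step(data, w, mu1, mu2):
    """One EM iteration: E-step gives responsibilities, M-step re-estimates."""
    # E-step: posterior probability that each point came from component 1.
    r = []
    for x in data:
        p1 = w * normal_pdf(x, mu1)
        p2 = (1 - w) * normal_pdf(x, mu2)
        r.append(p1 / (p1 + p2))
    # M-step: responsibility-weighted updates of the weight and means.
    n1 = sum(r)
    w_new = n1 / len(data)
    mu1_new = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2_new = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
    return w_new, mu1_new, mu2_new

# Toy data: three points near 0 and three near 5.
data = [-0.3, 0.1, 0.2, 4.8, 5.1, 5.3]
w, mu1, mu2 = 0.5, 1.0, 4.0
for _ in range(30):
    w, mu1, mu2 = em_step(data, w, mu1, mu2)
print(round(w, 2), round(mu1, 2), round(mu2, 2))
```

Each iteration increases the observed-data likelihood, which is the key property of EM; data augmentation carries the same missing-data idea over to sampling from the full posterior rather than maximizing.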
Tanner demonstrates a wide variety of techniques to handle many important problems and he illustrates them on real data. It is nice to have all of this compactly written in just 200 pages!
- It describes the Expectation Maximization (EM) algorithm, data augmentation, and related methods precisely.
- It is not so thick that reading it becomes a boring experience.
- Suitable as a graduate-level textbook or reference book.