In All Likelihood: Statistical Modelling and Inference Using Likelihood, 1st Edition
- Paperback: 542 pages
- Publisher: Oxford University Press, USA; 1st edition (March 1, 2013)
- Language: English
- ISBN-10: 0199671222
- ISBN-13: 978-0199671229
- Product Dimensions: 6.1 x 1.2 x 9.1 inches
- Shipping Weight: 2 pounds
- Customer Reviews: 16
- Amazon Best Sellers Rank: #814,139 in Books
Editorial Reviews
"This is a splendid book with its contents thoroughly covering all likelihood ... Statements are firm, and explanations are full and clear. This book may be used as a reference work. It is strongly recommended as an academic library volume, and individually for statistics lecturers, advanced
students, and researchers."
--The Mathematical Gazette
"To those of us to whom it is a continuing irritation to be told that there are only two kinds of statisticians, freqentist and Bayesian, this book will come as an enormous relief ... a remarkable book, which deserves the widest distribution; I hope it will gain many converts to the likelihood
About the Author
Yudi Pawitan is a Professor in the Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Sweden.
Customer Reviews
The examples make this book really useful compared to more technical texts like Bickel & Doksum or Lehmann. (These books are useful, of course, but not so much as texts for courses.) Pawitan's book has tons of really great little examples that bring the concepts down to earth for the reader. For instance, when he plots four score functions (normal, Poisson, binomial and Cauchy), you *see* immediately why estimation is more difficult in models such as the Cauchy compared to the normal. It also builds intuition about what the score function actually is. I have unpublished notes from John Marden (Statistics, UIUC), who was my statistical theory professor, which are very, very good. Pawitan's book is on par. The fact that the R code is available is fantastic.
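(To make the score-function comparison concrete: below is a minimal Python sketch, not the book's R code, plotting the score u(theta) = d/dtheta log L(theta) for a single observation under the four models the reviewer mentions. The observed value y = 1 and the binomial size n = 5 are arbitrary illustrative choices. The normal score is linear in theta, while the Cauchy score is non-monotone, which is exactly why the likelihood equation u(theta) = 0 can have multiple roots there.)

```python
# Minimal sketch (illustrative, not from the book): score functions for one
# observation y under four models. The normal score is linear; the Cauchy
# score is non-monotone, so its likelihood equation can have several roots.
import numpy as np
import matplotlib.pyplot as plt

y = 1.0                               # a single observed value (illustrative)
theta = np.linspace(-4, 6, 400)

scores = {
    "Normal (mean theta, sd 1)":  y - theta,
    "Poisson (mean theta)":       np.where(theta > 0, y / theta - 1, np.nan),
    "Binomial (n=5, prob theta)": np.where((theta > 0) & (theta < 1),
                                           y / theta - (5 - y) / (1 - theta),
                                           np.nan),
    "Cauchy (location theta)":    2 * (y - theta) / (1 + (y - theta) ** 2),
}

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (name, u) in zip(axes.flat, scores.items()):
    ax.plot(theta, u)
    ax.axhline(0, color="gray", lw=0.5)   # roots of u(theta) = 0 are the MLE candidates
    ax.set_title(name)
    ax.set_xlabel("theta")
    ax.set_ylabel("score u(theta)")
fig.tight_layout()
plt.show()
```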
Reading Dr. Pawitan's book introduced me to a very satisfying "third way," as he calls it. Instead of force-fitting all uncertainty into a probability, the "likelihood" approach recognizes two types of uncertainty, which is both novel in statistics and extremely refreshing once you understand why two types are necessary. The first, which I would call "well calibrated" uncertainty, is analogous to a confidence interval for the mean of a normal sample. With this type, we know how often we would be wrong under repeated sampling from this population, so we have a good idea of how well our method brackets the true mean, i.e., it is well calibrated.
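(A minimal sketch of what "well calibrated" means here, with made-up numbers: simulate repeated samples from a normal population and check how often the standard 95% interval for the mean brackets the true mean.)

```python
# Illustrative only: under repeated sampling from a normal population, the
# standard 95% t-interval for the mean covers the true mean ~95% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000
tcrit = stats.t.ppf(0.975, df=n - 1)   # t critical value for a 95% interval

covered = 0
for _ in range(reps):
    y = rng.normal(mu, sigma, n)
    half = tcrit * y.std(ddof=1) / np.sqrt(n)
    covered += (y.mean() - half) <= mu <= (y.mean() + half)

print(f"empirical coverage: {covered / reps:.3f}")   # close to 0.95
```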
The other type of uncertainty is unique to the likelihood approach. This type of uncertainty arises if precise, repeated sampling error rates cannot be derived or estimated. In this case you are left with basically two choices (apart from collecting more data): create asymptotically well-calibrated inferences (i.e., assume that as N -> infinity, your repeated-sampling probability statements would become more precise) or admit that you do not precisely know the error rate and then rely solely on the likelihood and perhaps some non-frequentist calibration metric (Dr. Pawitan shows how to do this using the AIC to "calibrate" the likelihood). This type of uncertainty is NOT stated in terms of probability, which I find incredibly honest, as giving probabilities gives the air of more accuracy/knowledge than is usually warranted with complex models.
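(To illustrate reporting uncertainty directly on the likelihood scale: below is a minimal Python sketch, not the book's code, of a pure likelihood interval {theta : L(theta)/L(theta_hat) > c} for a binomial proportion, found with an ordinary root finder. The data y = 8 out of n = 10 are made up; the cutoff c = 0.15 is the conventional choice, roughly matching a 95% interval via the chi-square approximation.)

```python
# Illustrative sketch: a pure likelihood interval for a binomial proportion.
# We find the two roots of log[L(theta)/L(theta_hat)] = log(c) with a root finder.
import numpy as np
from scipy.optimize import brentq

y, n, c = 8, 10, 0.15          # made-up data; c = 0.15 ~ 95% by chi-square calibration
theta_hat = y / n              # the MLE

def rel_loglik(theta):
    # log of the relative likelihood L(theta) / L(theta_hat)
    return (y * np.log(theta / theta_hat)
            + (n - y) * np.log((1 - theta) / (1 - theta_hat)))

target = np.log(c)
lo = brentq(lambda t: rel_loglik(t) - target, 1e-6, theta_hat)
hi = brentq(lambda t: rel_loglik(t) - target, theta_hat, 1 - 1e-6)
print(f"MLE = {theta_hat:.2f}, likelihood interval (c={c}): ({lo:.3f}, {hi:.3f})")
```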
From an applied standpoint, I think the likelihood approach is superior to the Bayesian approach not because it is necessarily more accurate, but because it possesses a far less cumbersome theoretical apparatus while retaining all the flexibility and elegance of a Bayesian approach. Likelihood methods do not require Markov chain Monte Carlo, nor do they require Jacobians for transformations on inferences; instead, all you need is a good old "root finder" to solve essentially all problems. Simulation is useful if you are doing bootstrapping along with likelihood, but it is not an essential part of the "inference machine," so to speak. For missing data, you can use Expectation-Maximization (EM), which again only requires a simple computer package. Finally, you can incorporate prior information (subjective opinion or objective data) as a prior likelihood, which, unlike a prior probability, does not need to integrate to one (in Bayesian statistics, you have to resort to "improper priors" and hope for the best in situations where you want to represent complete ignorance). Also, a prior likelihood is probably psychologically closer to what most of us do when we evaluate the prior plausibility of a hypothesis, as we usually aren't very good at estimating raw probabilities.
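(A minimal sketch of the prior-likelihood idea, with made-up numbers: since a prior likelihood need not integrate to one, you simply add log-likelihoods and maximize. Here the prior information is encoded as if it came from an earlier binomial experiment; the pseudo-data y0, n0 and the SciPy optimizer are illustrative choices, not the book's method verbatim.)

```python
# Illustrative sketch: combining a prior likelihood with the data likelihood.
# Unlike a Bayesian prior, the prior likelihood is just added on the log scale;
# no normalization (and no MCMC) is needed.
import numpy as np
from scipy.optimize import minimize_scalar

y, n = 8, 10           # current data (made up)
y0, n0 = 3, 6          # "prior" pseudo-data encoding earlier information (made up)

def neg_combined_loglik(theta):
    data_ll  = y  * np.log(theta) + (n  - y ) * np.log(1 - theta)
    prior_ll = y0 * np.log(theta) + (n0 - y0) * np.log(1 - theta)
    return -(data_ll + prior_ll)

res = minimize_scalar(neg_combined_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"MLE from data alone: {y/n:.3f}, combined estimate: {res.x:.3f}")
# The combined estimate lands at (y + y0) / (n + n0) = 0.688, pulled toward the prior.
```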
The actual book is very complete, with good coverage of the fundamental mathematical statistics. Sometimes he could be clearer about his motivation for a particular topic, but overall I found it an excellent applied statistics text with great theoretical underpinning. If you are looking for a modern, flexible, and nuanced approach to applied statistics, you can do no better than this book.