Qty: 1
  • List Price: $129.00
  • Save: $24.77 (19%)
Only 3 left in stock (more on the way).
Ships from and sold by Amazon.com.
Gift-wrap available.
+ $3.99 shipping
Used: Good
Sold by -Daily Deals-
Comment: This Book is in Good Condition. Used Copy With Light Amount of Wear. 100% Guaranteed.
Access codes and supplements are not guaranteed with used items.
Sell yours for a Gift Card: we'll buy it for $50.88.

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach Hardcover – December 4, 2003

ISBN-13: 978-0387953649 ISBN-10: 0387953647 Edition: 2nd

Buy New: $104.23
27 New from $93.65 | 19 Used from $84.48

Format      Price      New from   Used from
Kindle      $35.88     --         --
Hardcover   $104.23    $93.65     $84.48
Free Two-Day Shipping for College Students with Amazon Student


Frequently Bought Together

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach + Model Based Inference in the Life Sciences: A Primer on Evidence
Price for both: $140.72


Product Details

  • Hardcover: 488 pages
  • Publisher: Springer; 2nd edition (December 4, 2003)
  • Language: English
  • ISBN-10: 0387953647
  • ISBN-13: 978-0387953649
  • Product Dimensions: 6.1 x 1.1 x 9.2 inches
  • Shipping Weight: 1.8 pounds
  • Average Customer Review: 4.2 out of 5 stars (9 customer reviews)
  • Amazon Best Sellers Rank: #465,205 in Books

Customer Reviews

4.2 out of 5 stars (9 customer reviews)
5 star: 5 | 4 star: 3 | 3 star: 0 | 2 star: 0 | 1 star: 1
"Chapter 1 is a great introduction that everyone should read." -- Michael R. Chernick
"Burnham does a tremendous job of explaining both how and why to use information theory to fit statistical models to data." -- MR GEORGE S YOUNG
"Burnham & Anderson provide a very clear explanation of why we may wish to switch from fixed to random effects." -- N. Tuzov

Most Helpful Customer Reviews

38 of 39 people found the following review helpful By Michael R. Chernick on February 9, 2008
Format: Hardcover
Burnham and Anderson have put together a scholarly account of the developments in model selection techniques from the information-theoretic viewpoint. This is an important practical subject. As computer algorithms for fitting models become ever more available, and data mining and exploratory analysis become more popular and more often used by novices, problems with overfitting models will again raise their ugly heads. This has been an issue for statisticians for decades, but the problems and the art of model selection have not been commonly covered in elementary courses on statistics and regression. George Box puts proper emphasis on the iterative nature of model selection and the importance of applying the principle of parsimony in many of his books. Classic texts on regression like Draper and Smith point out the pitfalls of goodness-of-fit measures like R-square and explain Mallows Cp and adjusted R-square. There are now also a few good books devoted to model selection, including the book by McQuarrie and Tsai (which I recently reviewed for Amazon) and the Chapman and Hall monograph by A. J. Miller.
Burnham and Anderson address all these issues and provide the best coverage to date of bootstrap and cross-validation approaches. They are also careful in their historical account and bring some coherence to the scattered literature; their references are thorough. Their theme is the information-theoretic measures based on the Kullback-Leibler distance. The breakthrough in this theory came from Akaike in the 1970s, with improvements and refinements coming later. The authors provide the theory, but more importantly, they provide many real examples to illustrate the problems and show how the methods work.
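As a rough illustration of the AIC machinery the reviewer describes (my own sketch, not an excerpt from the book), the following Python example assumes ordinary least-squares models with Gaussian errors, so AIC can be computed from the residual sum of squares; the simulated data and model names are invented for the example:

# Minimal sketch (not from the book): ranking least-squares models by AIC
# and converting AIC differences into Akaike weights.
# For OLS with Gaussian errors, AIC = n*log(RSS/n) + 2k up to a constant,
# where k counts estimated parameters (coefficients plus error variance).
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)   # data simulated from a linear model

def aic_ols(y, X):
    """AIC for an ordinary least-squares fit with design matrix X."""
    beta, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(resid[0]) if resid.size else float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1                       # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

candidates = {                               # hypothetical model set
    "intercept only": np.column_stack([np.ones(n)]),
    "linear":         np.column_stack([np.ones(n), x]),
    "quadratic":      np.column_stack([np.ones(n), x, x ** 2]),
}
aics = {name: aic_ols(y, X) for name, X in candidates.items()}
best = min(aics.values())
weights = {name: np.exp(-0.5 * (a - best)) for name, a in aics.items()}
total = sum(weights.values())
for name, a in aics.items():
    print(f"{name:15s} AIC={a:8.2f}  dAIC={a - best:5.2f}  weight={weights[name] / total:.3f}")

Models within roughly 2 AIC units of the best are usually treated as competitive, and the Akaike weights express the relative support for each candidate.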
29 of 30 people found the following review helpful By Neil Frazer on August 23, 2005
Format: Hardcover Verified Purchase
I admire this book very much for its accessible treatment of AIC, but if it were reduced in length by half, it would be twice as good. The authors cannot resist repeating themselves, usually several times, especially when giving advice of the "motherhood and apple pie" variety. Another annoying feature is that many references are given for philosophical points, yet sometimes when a useful result is stated without proof, no reference is provided. For example, on page 12 an expression for maximized likelihood is given without a derivation or a reference. Inside this fat book there is a thin book crying to be let out.
5 of 5 people found the following review helpful By Amazon Customer on October 7, 2010
Format: Paperback Verified Purchase
If you want to learn about model selection techniques and multimodel inference, this is your book. In my opinion, the first few chapters should be required reading for anyone using model selection techniques. The later chapters become quite technical (above my head, I'm not ashamed to say!), but they are undoubtedly important as well, and I'll work through them eventually on the strength of the chapters I have read.
3 of 3 people found the following review helpful By N. Tuzov on April 28, 2013
Format: Paperback
This book emphasizes the fundamentals of proper model selection, which are pretty hard to master in school. Building a good model is a long process that requires a decent level of qualitative understanding of the data generating mechanism. Based on that knowledge, a sizable amount of work should be done well before the data are plugged into a statistical software package. Of course, if one is in a very data-rich situation, one can get away with a "let the computer sort it out" approach, but such cases are rare.

After I finished the book, my understanding of the bias-variance tradeoff principle improved substantially. In particular, one should remember that overfitting is not only about including redundant covariates that then cause abysmal out-of-sample performance. Suppose, based on the qualitative information about the data generating process, you are convinced that certain factor(s) "should" be in the model. As the sample size decreases, you may find out that the "compulsory" factor(s) must be dropped to preserve the optimal bias-variance tradeoff. The catch is that, even if you manage to guess exactly what factors constitute the "true" model, the corresponding regression coefficients still have to be estimated from the data. The more coefficients, the greater the complexity of your model pool. If the sample size is low enough, you will be forced to reduce the complexity to avoid overfitting, which entails excluding some "true" factors from consideration.
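The sample-size effect described above is what the book's small-sample correction, AICc = AIC + 2k(k+1)/(n - k - 1), formalizes: the extra penalty per parameter blows up as n approaches k. A toy calculation in Python (my own numbers, purely illustrative) shows how quickly the correction grows:

# Toy illustration (not from the book or the review): the AICc small-sample
# correction term 2k(k+1)/(n - k - 1) for a few parameter counts k and sample sizes n.
def aicc_correction(k, n):
    return 2 * k * (k + 1) / (n - k - 1)

for n in (200, 50, 20, 12):
    penalties = {k: round(aicc_correction(k, n), 1) for k in (3, 6, 9)}
    print(f"n={n:3d}  extra AICc penalty by k: {penalties}")
# At n=200 the correction is negligible for any of these k; at n=12 a
# nine-parameter model pays roughly 90 extra AICc units, so even a "true"
# covariate may have to be dropped to preserve the bias-variance tradeoff.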

Another major statistical concept the book clarified for me is the elusive distinction between "fixed" and "random" effects.
1 of 1 people found the following review helpful By Amazon Customer on May 26, 2014
Format: Kindle Edition Verified Purchase
These two authors are very well known for their work and opinions on model selection and alternatives to null-hypothesis testing IN THE 20TH CENTURY. However, this book has declined in utility and is not a 21st-century view of statistics, as modern methods and synthetic views have outpaced their dated ideas while incorporating some of the best ideas from the early 20th century. More importantly, it is clear now that the strategy of trashing one approach in favor of another does not contribute to progress in science. Their straw-man characterizations of standard frequentist statistics and null-hypothesis approaches are not useful when it is clear that any approach has strengths and weaknesses and that one should tailor her or his statistical models to the question, the data, and the parameter estimates of interest. I would not recommend this book to anyone at this date (even though I have purchased it, used it, and recommended it in the past); rather, I would urge the authors to read the other pieces in the Ecology (2014; Ecology 95: 609-653) special feature, to which they contributed but in which they did not really participate in a collaborative or open-minded way. I agree with the other authors in that feature: plot, check assumptions, estimate parameters, examine effect sizes, plot, think about biological significance, and ignore those who yell too loudly about how there is only one approach to science.
