- Paperback: 416 pages
- Publisher: Cambridge University Press; 1 edition (January 28, 2008)
- Language: English
- ISBN-10: 0521717701
- ISBN-13: 978-0521717700
- Product Dimensions: 7.4 x 0.9 x 9.7 inches
- Shipping Weight: 2 pounds
- Average Customer Review: 10 customer reviews
- Amazon Best Sellers Rank: #1,904,692 in Books
- #199 in Books > Computers & Technology > Computer Science > AI & Machine Learning > Natural Language Processing
- #273 in Books > Computers & Technology > Computer Science > AI & Machine Learning > Neural Networks
- #393 in Books > Computers & Technology > Computer Science > AI & Machine Learning > Computer Vision & Pattern Recognition
Pattern Recognition and Neural Networks 1st Edition
This book uses tools from statistical decision theory and computational learning theory to create a rigorous foundation for the theory of neural networks. On the theoretical side, Pattern Recognition and Neural Networks emphasizes probability and statistics; nearly all of the results are given proofs, many of them original. On the application side, the emphasis is on pattern recognition, and most of the examples come from real-world problems. In addition to the more common types of networks, the book has chapters on decision trees and belief networks from the machine-learning field. It is intended for use in graduate courses in statistics and engineering. A strong background in statistics is needed to fully appreciate the theoretical developments and proofs, but undergraduate-level linear algebra, calculus, and probability are sufficient to follow the book.
"...an excellent text on the statistics of pattern classifiers and the application of neural network techniques...Ripley has managed...to produce an altogether accessible text...[it] will be rightly popular with newcomers to the area for its ability to present the mathematics of statistical pattern recognition and neural networks in an accessible format and engaging style." Nature
"...a valuable reference for engineers and science researchers." Optics & Photonics News
"The combination of theory and examples makes this a unique and interesting book." International Statistical Institute Journal
Top customer reviews
I came at the book with a computer science background (and no prior neural network experience) and found the material rather difficult to follow. The statistics and math needed to really follow the book were more than I expected.
This doesn't mean the book is bad. After skimming through it a couple of times I really believe that the other reviewers are right -- this is a great resource on neural networks. However, just be sure you have the appropriate background to really get the most out of it.
If you are looking for an introductory book on neural nets, or are a little rusty on your statistics and math, I would recommend looking elsewhere.
What sells me on this book, quite frankly, is that it always keeps an eye on a real-world example. No model or algorithm is introduced without a real-world problem it was intended to solve. You would be better served by the Bishop book (Neural Networks for Pattern Recognition, by C. Bishop, ISBN 0198538642) if you are looking for a quick introduction. I would say Ripley's book is the perfect second book on the subject.
I must applaud the editors and designers of the book. The book itself, apart from the material it covers, is an aesthetically most pleasant creation for a somewhat dry subject. Its use of margins is a piece of art: the margins are wide and accessible, important points are highlighted there, and you can get to the needed point by flipping the pages quickly. The quality of the paper is very good, and the book opens well and holds its form very well. If you take it seriously and use it often, these qualities will gain in importance.
However, a statistical theory of nonlinear classification algorithms shows that these methods have nice properties and mathematical justification. Statistical pattern recognition research is well over 30 years old and very well established, so these connections are important for putting neural networks on firm ground and for gaining greater acceptance from the statistical as well as the engineering community.
Ripley provides a theoretical treatment of the state of the art in statistical pattern recognition. His treatment is thorough, covering all the important developments. He provides a large bibliography and a nice glossary of terms at the back of the book.
Recent papers on neural networks and data mining are often quick to generate results but not very good at providing validation techniques that show the perceived performance is not just an artifact of overfitting a model. This is an area where statisticians play a very important role, as they are keenly aware, through their experience with regression modeling and prediction, of the crucial need for cross-validation. Ripley covers this quite clearly in Section 2.6, titled "How complex a model do we need?"
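As a rough illustration of the point (my own minimal sketch, not code from the book), k-fold cross-validation scores a classifier only on held-out folds, so an over-complex model that merely memorizes the training data is penalized. The data set, classifier, and fold count below are arbitrary choices for the example:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# A small benchmark data set and a simple classifier (illustrative choices).
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# 10-fold cross-validation: each fold is held out once, so accuracy is
# estimated on unseen data rather than on the training set itself.
scores = cross_val_score(clf, X, y, cv=10)
print("mean CV accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```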
It is nice to see the thoroughness of this work. For example, in error rate estimation, many know of the advances of Lachenbruch and Mickey for discriminant analysis and the further advances of Efron and others with the bootstrap. But in between there was also significant progress by Glick on smooth estimators. This work has been overlooked by many statisticians, probably because some of it appears in the engineering literature (although one important paper was in the Journal of the American Statistical Association [JASA] in 1972). To some extent the oversight may be due to the fact that it was not mentioned in Efron's famous 1983 JASA paper, and hence it is usually missed in the bootstrap literature. Bootstrap methods and cross-validation play a prominent role in this text.
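To give a flavour of bootstrap error-rate estimation (again a hypothetical sketch under my own assumptions, not Efron's .632 estimator or the book's treatment): refit the classifier on bootstrap resamples and score each refit on the cases left out of that resample.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n, B, errors = len(y), 200, []

for _ in range(B):
    # Draw a bootstrap sample with replacement; the cases not drawn
    # act as a small test set for this replicate.
    idx = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), idx)
    if len(oob) == 0:
        continue
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    errors.append(np.mean(clf.predict(X[oob]) != y[oob]))

# Average out-of-sample misclassification rate over the replicates.
print("bootstrap (leave-out) error estimate: %.3f" % np.mean(errors))
```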
This is an excellent reference book for anyone seriously interested in pattern recognition research. For applied and theoretical statisticians who want a good account of the theory behind neural networks, it is a must.