"This book gives a thorough but nevertheless self-contained treatment of neural network learning from the perspective of computational learning theory." Mathematical Reviews
"This book is a rigorous treatise on neural networks that is written for advanced graduate students in computer science. Each chapter has a bibliographical section with helpful suggestions for further reading... this book would be best utilized within an advanced seminar context where the student would be assisted with examples, exercises, and elaborative comments provided by the professor." Telegraphic Reviews
This book describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. The authors also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is essentially self-contained, since it introduces the necessary background material on probability, statistics, combinatorics, and computational complexity, and it is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.