Neural Networks and Analog Computation: Beyond the Turing Limit (Progress in Theoretical Computer Science) [Kindle Edition]

Hava T. Siegelmann
3.0 out of 5 stars (5 customer reviews)

Digital List Price: $119.00
Print List Price: $169.00
Rent From: $33.11 (save up to $135.89, 80%)
Buy Price: $100.96 (save $68.04, 40%)

  • Print ISBN-10: 0817639497
  • Print ISBN-13: 978-0817639495
  • Edition: 1997

Formats

Kindle Edition: $100.96 (rent from $33.11)
Hardcover: $115.22

Book Description

The theoretical foundations of Neural Networks and Analog Computation conceptualize neural networks as a particular type of computer consisting of multiple assemblies of basic processors interconnected in an intricate structure. Examining these networks under various resource constraints reveals a continuum of computational devices, several of which coincide with well-known classical models. On a mathematical level, the treatment of neural computations not only enriches the theory of computation but also explicates the computational complexity associated with biological networks, adaptive engineering tools, and related models from the fields of control theory and nonlinear dynamics. The material in this book will be of interest to researchers in a variety of engineering and applied sciences disciplines. In addition, the work may provide the base of a graduate-level seminar in neural networks for computer science students.


Editorial Reviews

Review

"All of the three primary questions are considered: What computational models can the net simulate (within polynomial bounds)? What are the computational complexity classes that are relevant to the net? How does the net (which, after all, is an analog device) relate to Church’s thesis? Moreover the power of the basic model is also analyzed when the domain of reals is replaced by the rationals and the integers."

—Mathematical Reviews

"Siegelmann's book focuses on the computational complexities of neural networks and making this research accessible...the book accomplishes the said task nicely."

—SIAM Review, Vol. 42, No. 3

From the Back Cover

The theoretical foundations of Neural Networks and Analog Computation conceptualize neural networks as a particular type of computer consisting of multiple assemblies of basic processors interconnected in an intricate structure. Examining these networks under various resource constraints reveals a continuum of computational devices, several of which coincide with well-known classical models. What emerges is a Church-Turing-like thesis, applied to the field of analog computation, which features the neural network model in place of the digital Turing machine. This new concept can serve as a point of departure for the development of alternative, supra-Turing, computational theories. On a mathematical level, the treatment of neural computations not only enriches the theory of computation but also explicates the computational complexity associated with biological networks, adaptive engineering tools, and related models from the fields of control theory and nonlinear dynamics.

The topics covered in this work will appeal to a wide readership from a variety of disciplines. Special care has been taken to explain the theory clearly and concisely. The first chapter reviews the fundamental terms of modern computational theory from the point of view of neural networks and serves as a reference for the remainder of the book. Each of the subsequent chapters opens with introductory material and proceeds to explain the chapter’s connection to the development of the theory. Thereafter, the concept is defined in mathematical terms.

Although the notion of a neural network essentially arises from biology, many engineering applications have been found through highly idealized and simplified models of neuron behavior. Particular areas of application have been as diverse as explosives detection in airport security, signature verification, financial and medical time series prediction, vision, speech processing, robotics, nonlinear control, and signal processing. The focus in all of these models is entirely on the behavior of networks as computers.

The material in this book will be of interest to researchers in a variety of engineering and applied sciences disciplines. In addition, the work may provide the base of a graduate-level seminar in neural networks for computer science students.


Product Details

  • File Size: 2180 KB
  • Print Length: 181 pages
  • Publisher: Birkhäuser; 1997 edition (December 1, 1998)
  • Sold by: Amazon Digital Services, Inc.
  • Language: English
  • ASIN: B000PY3SAK
  • Text-to-Speech: Enabled
  • Lending: Not Enabled
  • Amazon Best Sellers Rank: #1,846,811 Paid in Kindle Store


Customer Reviews

3.0 out of 5 stars (5 customer reviews)
Most Helpful Customer Reviews
4 of 4 people found the following review helpful
4.0 out of 5 stars Elegant theoretical apparatus September 19, 2007
Format:Hardcover
This book provides a systematic overview of a beautiful theoretical apparatus that the author and collaborators have developed for describing the computational power of neural networks. It addresses neural networks from the standpoint of computational complexity theory, not machine learning.

A central issue that arises is what values the neural couplings can take on. The book outlines the consequences of various choices. Rational-valued neural networks turn out to be Turing machines, a contribution of general significance. The book shows (and perhaps unduly emphasizes) that irrational-valued couplings can yield Superturing computation, a result which has been controversial.

If irrational numbers can arise in a computational setting, then the work outlined here is clearly a major landmark that deserves the careful, systematic exposition the book provides. On the other hand, maybe irrational numbers are just not relevant to actual computational devices. (They certainly aren't yet.) If so, the book is still a worthwhile theoretical exercise leading to an elegant set of results. Even if one leans toward the latter option - and I would say that this is probably the majority view - I don't think any of us really _know_ where the irrational numbers stand vis-a-vis our computational universe.

Even if you intuitively see that an infinitely rich source of information, which is what an irrational number provides, should yield Super-Turing computation, the book is still valuable. (If you don't have this intuition, think about it more!) There is a lot to be gleaned from the non-obvious (at least to me) details of how that intuition works itself out.
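The reviewer's intuition can be made concrete with a toy sketch (an illustration under stated assumptions, not Siegelmann's actual construction): using the saturated-linear activation σ(x) = min(max(x, 0), 1) common in this line of work, a single real-valued weight can hold an arbitrary bit string via a Cantor-style base-4 encoding, and a shift-and-threshold loop reads the bits back out, one per step.

```python
def sigma(x):
    # saturated-linear (ramp) activation: clamps to [0, 1]
    return min(max(x, 0.0), 1.0)

def encode(bits):
    # Cantor-style base-4 encoding: digit (2b+1) for bit b, so encoded
    # values stay away from the decision boundary of the ramp
    q = 0.0
    for i, b in enumerate(bits, start=1):
        q += (2 * b + 1) / 4 ** i
    return q

def decode(q, n):
    # recover n bits by repeated shift-and-threshold
    out = []
    for _ in range(n):
        b = int(sigma(4 * q - 2))   # the ramp acts as an exact threshold here
        out.append(b)
        q = 4 * q - (2 * b + 1)     # shift: drop the leading base-4 digit
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(bits), len(bits)) == bits
```

A finite string makes this exact in floating point (all the fractions are dyadic); the book's point is that an *irrational* weight packs an infinite, incompressible string into one coupling, which is where the super-Turing power comes from.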

The book has more technical flaws.
7 of 8 people found the following review helpful
4.0 out of 5 stars Discussion of the consequences, not the original proof October 4, 2008
Format:Hardcover
Skeptics wanting to see the original proof, and how such "machines" can exist as natural phenomena within the constraints of physics, should refer to the author's peer-reviewed articles:

H.T. Siegelmann, "Computation Beyond the Turing Limit," Science, 268(5210), 28 April 1995: 545-548

and

H.T. Siegelmann, "Analog Computational Power," Science, 271, 19 January 1996: 373

This book discusses the consequences, and the limitations of analog computation using neural networks.
16 of 25 people found the following review helpful
Format:Hardcover
A computer is an artifact. Through specific control mechanisms of electric currents it was possible to domesticate natural phenomena and put them at man's service, giving rise to the levels of automation that characterize the world at the turn of the millennium. But a computer is an analog artifact. Paul Cull, from Oregon State University, states this computational anecdote in the following terms: «That analog devices behave digitally is the basis for a large part of electronics engineering and allows for the construction of electronic computers. It is part of the engineering folklore that when the gain is high enough any circuit from a large class will eventually settle into one of two states, which can be used to represent booleans 0 and 1. As far as we can tell, this theorem and its proof has never been published, but it probably appears in a now unobtainable MIT technical report of the 1950s.»

Recently much work has been done to show that digital computers are a particular class of analog computers, and that analog computers can exhibit greater computational power. In fact, digital computers are extreme (weak) analog computers. A book was needed to introduce these ideas to the graduate student in Theoretical Computer Science and to the general researcher in the new field of Non-standard Models of Computation. Hava Siegelmann's book partially fills this gap in the computational literature.
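Cull's folklore observation, that a high-gain circuit settles into one of two states, is easy to see numerically. A minimal sketch (illustrative only, with tanh standing in for a generic amplifier):

```python
import math

def settle(x0, gain, steps=50):
    # iterate a high-gain "amplifier"; each pass sharpens the state
    x = x0
    for _ in range(steps):
        x = math.tanh(gain * x)
    return x

# almost any nonzero start collapses to one of two saturated states,
# which can stand for the booleans 0 and 1
assert settle(0.2, gain=10.0) > 0.999
assert settle(-0.01, gain=10.0) < -0.999
```

With low gain (e.g. gain < 1) the only fixed point is 0 and the binary behavior disappears, which is exactly the gain condition in the anecdote.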
Over the last decade, researchers have speculated that although the Turing model is indeed able to simulate a large class of computations, it does not necessarily provide a complete picture of the computations possible in nature. As pointed out by Hava Siegelmann, the most famous proposals of new models were made by Richard Feynman and Roger Penrose.
12 of 20 people found the following review helpful
1.0 out of 5 stars Cogently argued but fatally flawed April 22, 2002
Format:Hardcover
Some of this book is an interesting discussion of the boundaries of computability. However, the book's central claim, that you can exceed the Turing limit, requires the storing of infinitely precise variables in a physical device. This is a physical impossibility which no amount of gratuitous logical notation will make go away. Even if you put aside the difficulties of measuring a value to infinite precision, quantum indeterminacy and discontinuity will not allow any physical object to store or encode an infinitely precise value in any fashion. Once this premise is seen to be false, most of the other interesting claims in the book, and all the hypercomputational ones, immediately collapse.
4 of 10 people found the following review helpful
1.0 out of 5 stars Lots of notation, little content November 27, 2002
By A Customer
Format:Hardcover
This book certainly claims to give much more than what it actually provides. Trying to read it, you'll have to swallow a formalism that unfortunately does not pay off. There is absolutely no revolutionary idea, just well-known facts and a pretension to do better than a Turing machine, based on assumptions (namely, working with arbitrary precision) whose sole existence suffices to do better than any Turing machine; you don't need a whole book to say this.
