- Series: Cognitive Technologies
- Hardcover: 509 pages
- Publisher: Springer; 2007 edition (February 2, 2007)
- Language: English
- ISBN-10: 354023733X
- ISBN-13: 978-3540237334
- Product Dimensions: 6.1 x 1.1 x 9.2 inches
- Shipping Weight: 2.1 pounds
- Average Customer Review: 2 customer reviews
- Amazon Best Sellers Rank: #3,235,128 in Books
Artificial General Intelligence (Cognitive Technologies), 2007 Edition
From the Back Cover
This is the first book on current research on artificial general intelligence (AGI), work explicitly focused on engineering general intelligence – autonomous, self-reflective, self-improving, commonsensical intelligence.
Each chapter explains a specific aspect of AGI in detail, while also investigating the common themes in the work of diverse groups and posing the big, open questions in this vital area.
The book will be of interest to researchers and students who require a coherent treatment of AGI and the relationships between AI and related fields such as physics, philosophy, neuroscience, linguistics, psychology, biology, sociology, anthropology and engineering.
About the Author
The chief editor of the book, Dr. Ben Goertzel, has published 4 research treatises in AI, cognitive science and systems theory, a biography of Linus Pauling, and one previous edited volume (in the area of dynamical psychology), as well as numerous research papers (for his CV, see www.goertzel.org/ben/newResume.htm).
Dr. Ben Goertzel has been involved in AI research and application development since the late 1980s. He holds a PhD in mathematics from Temple University, and over the period 1989-1997 he held several university faculty positions in mathematics, computer science, and psychology, in the US, New Zealand and Australia.
Dr. Goertzel is author of numerous research papers and journalistic articles, a biography of Linus Pauling, and five scholarly books dealing with topics in the cognitive sciences, including Chaotic Logic (Plenum Press, 1994), and Creating Internet Intelligence (Plenum Press, 2001).
Currently, as CEO of the software firms Biomind LLC and Novamente LLC, he is leading a team of AI researchers in the development and commercialization of Artificial General Intelligence technology.
Cassio Pennachin has been leading software development projects since the mid-1990s, in artificial intelligence, bioinformatics, operations research and other areas. Prior to taking on his current role as CTO of Novamente LLC and Biomind LLC, he served as founder and CEO of Vetta Technologies, a Brazil-based software consulting firm, and he led a team developing mass spectrometry data analysis software for Proteometrics. From 1998 to 2001, Cassio was VP of R&D at Webmind Inc., leading several projects in AI, data mining and information retrieval.
Ben and Cassio are the chief architects of the Novamente AI Engine, one of the AGI projects described in the book.
Top customer reviews
Progress has certainly been made in artificial intelligence: intelligent machines exist and are used quite extensively in business and industry. But these machines are limited if one judges them from the standpoint of what is possible using human intelligence. The algorithms, or reasoning patterns, that they deploy are confined to a specific domain, such as finance, radiology, or network engineering. Human intelligence, by contrast, can function in many different domains: a good chess player can also be a good musician or a good architect. One can of course load a single machine with many algorithms, each with expertise in a particular domain, but they cannot cross over from one domain to another without considerable alteration by the designer or specialist. And any change in one domain-specific algorithm or reasoning pattern will not affect the efficacy of another algorithm or reasoning pattern with expertise in a different domain. To make an analogy with what is often discussed in cognitive science, the machines of today thus have "modularized" intelligence: the modules, or programs, are designed to "think" in a certain domain or perform tasks restricted to certain domains.
There are a few in the artificial intelligence community who believe that genuine machine intelligence must at least be domain-independent, exhibiting curiosity and an ability to adapt to radically new situations. Such intelligence, by analogy with the human case, must be general enough to deal with situations, challenges, and contexts that are not tied to one domain. This has been called 'artificial general intelligence' (AGI), and it is the subject of this book, a collection of articles by some of the individuals who have been actively involved in AGI and are working hard to bring it to fruition. The challenges are enormous, due in part to the paucity of funding for such endeavors, but mostly to the conceptual difficulty of constructing reasoning patterns that can operate in many different domains without the assistance of a human engineer or designer. Suffice it to say that the goals discussed in this book represent some of the most ambitious projects ever attempted in the history of technology.
To assess or monitor progress in AGI requires at least a working definition of intelligence, and in the article by Pei Wang entitled "The Logic of Intelligence" this requirement is articulated clearly, albeit in a more general context. Wang asks whether there is an "essence of intelligence" that distinguishes intelligent entities from non-intelligent ones. This is an interesting question, since answering it is necessary if one is to gauge progress in AGI: if the boundary between non-intelligent and intelligent systems is ill-defined, then claims regarding the status of AGI would be unfounded. But the definition of intelligence must also be fruitful in a practical sense, since if AGI is to be successful it must have wide application in business, industry, and education. Wang settles on a "working" definition of intelligence, which he regards as a definition realistic enough for researchers to work with directly. Such a definition is robust in the sense that it is simple, stays close to the intuitive concept being defined, and allows a certain degree of progress to be made. His working definition of intelligence can be categorized as an adaptive one: an intelligent machine is one that can adapt to its environment while having only insufficient knowledge and resources. The machine is therefore able to take the initiative to change its knowledge base or reasoning patterns as it confronts novel situations in the environment. He is careful to note what an unintelligent machine would be like, namely one designed with the explicit assumption that the problems it attempts to solve are exclusively those it has the knowledge and resources for, i.e. such a machine would be "programmed" to tackle certain problems of interest to the user, and would be given only those snippets of knowledge or expertise deemed relevant by this user.
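Wang's "adaptation under insufficient knowledge and resources" can be made concrete with a toy sketch. The agent below is entirely hypothetical (it is not Wang's system): it answers queries from whatever evidence it has accumulated so far, and a fixed memory capacity forces it to forget old statements when new ones arrive.

```python
from collections import OrderedDict

class BoundedAgent:
    """Toy illustration of working with insufficient knowledge and resources:
    the agent answers from the evidence it has seen so far, and a fixed
    capacity forces it to evict the least-recently-touched statement."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.kb = OrderedDict()  # statement -> [positive, total] evidence counts

    def observe(self, statement, outcome):
        # Re-inserting moves the statement to the "most recent" end.
        pos, total = self.kb.pop(statement, [0, 0])
        self.kb[statement] = [pos + (1 if outcome else 0), total + 1]
        if len(self.kb) > self.capacity:   # insufficient resources:
            self.kb.popitem(last=False)    # forget the oldest statement

    def believe(self, statement):
        """Best current answer; None when knowledge is insufficient."""
        if statement not in self.kb:
            return None
        pos, total = self.kb[statement]
        return pos / total

agent = BoundedAgent(capacity=2)
agent.observe("ravens are black", True)
agent.observe("ravens are black", True)
agent.observe("swans are white", True)
agent.observe("swans are white", False)
print(agent.believe("ravens are black"))   # 1.0
print(agent.believe("swans are white"))    # 0.5
```

An unintelligent machine, in Wang's contrast, would instead ship with a fixed table of exactly the answers its designer anticipated.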
If the user were to give an intelligent machine this same collection of problems, it may not be able to find the solution more efficiently than the unintelligent one (or even find the "correct" solution), as the time scales needed for adaptation may be too long relative to the time needed for the unintelligent machine to solve the problems. The author recognizes this possible degradation in performance when using an intelligent machine, and such an issue will be very important when decisions are being made to deploy intelligent machines in time-critical situations or in situations where human or animal health is at stake.
Wang calls his version of AGI the 'Non-Axiomatic Reasoning System' (NARS), which deploys 'experience-grounded semantics', to be distinguished from the 'model-theoretic' semantics used in ordinary computing machines and foundational to much of theoretical computer science. In NARS, truth is dependent on the amount of evidence available, as, essentially, is meaning. Wang also discusses in detail the need for 'categorical logic' for knowledge representation, again because the machine is expected to operate with insufficient knowledge and resources, with 'evidence' playing the key role in deciding the truth of statements (rather than mere assignments of 'T' or 'F'). A NARS system will arrive at a solution that is 'reasonable', i.e. an optimal solution given the knowledge it has at the time. Mistakes of course can be made, and in fact should be made, since otherwise the machine cannot learn from experience (trial-and-error learning falls within the author's boundaries of what counts as intelligent). An intelligent machine of the NARS type will therefore not be "fool-proof and incapable of error", to quote a line from a popular Hollywood movie. It will, however, constantly update its knowledge base, a feature the author calls 'self-revisable'. He does not really say whether such a machine could exhibit curiosity: must the problems it attempts to solve be instigated by the user, or can it take the initiative to explore new knowledge bases or domains? If the latter, such a machine might cause problems in deployment, since it could wander in conceptual space and not focus on the problems it was put in place to solve. He does, however, allow for autonomous behavior and creativity in the machine, even to such a degree that it completely loses track of the input tasks, i.e. the input tasks become 'alienated', to use his words.
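The evidence-dependence of truth in NARS can be made concrete. In Wang's published formulation, a statement's truth value is a (frequency, confidence) pair computed from positive and negative evidence counts, and revision pools the evidence behind two judgements of the same statement. A minimal sketch in that style (with k the 'evidential horizon', conventionally 1):

```python
def nars_truth(w_plus, w_minus, k=1):
    """Experience-grounded truth value in the style of NARS:
    frequency  f = w+ / w       (proportion of positive evidence)
    confidence c = w / (w + k)  (stability of f, given total evidence w)
    """
    w = w_plus + w_minus
    f = w_plus / w if w else 0.5  # no evidence: frequency undefined; 0.5 placeholder
    c = w / (w + k)
    return f, c

def revise(e1, e2):
    """Revision rule: pool the evidence from two judgements of one statement."""
    (w1p, w1m), (w2p, w2m) = e1, e2
    return (w1p + w2p, w1m + w2m)

# "Ravens are black", supported by 9 positive and 1 negative observation:
f, c = nars_truth(9, 1)
print(f, c)   # 0.9, ~0.909 -- more evidence pushes confidence toward (never to) 1
```

Note how this differs from model-theoretic semantics: no statement ever reaches confidence 1, so every belief remains revisable by further experience.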
In this regard, a NARS machine is somewhat like a human philosopher: it can explore large conceptual spaces on its own and possibly get lost in them. Or, more positively, it can find knowledge it did not possess before and construct concepts novel to itself (i.e. exhibit 'local creativity').
There are many other interesting discussions throughout the book, with each author outlining his or her notion of what it means for a machine to be intelligent, along with strategies for constructing such machines. One of these, the Novamente project, has been widely discussed online and is probably the oldest of the AGI attempts described in the book (at least from the standpoint of its origins). Particularly interesting in the Novamente project is its connection with dynamical systems, specifically the role of attractors. Though the authors do not mention it, the 'shadowing' property from the theory of dynamical systems may be fruitful for them to consider, especially in their use of 'terminal attractors'. The shadowing property, if possessed by the 'mind' of Novamente, would guarantee that an arbitrary dynamic pattern, while perhaps not a true 'concept map' (as the authors define concept maps), would be an approximation to some concept map. It would likewise suggest domain-independence of the reasoning patterns, since any concept map acting on a particular domain could be represented, or approximated, by some reasoning pattern. This reviewer does not know whether the shadowing property has been applied to artificial intelligence, or even to neural networks, but if the dynamical-systems paradigm holds in the latter, it seems an idea that may hold some promise, however small, for the development of domain-independent artificial intelligence.
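The shadowing idea can be illustrated numerically (a hedged sketch of the reviewer's aside, not anything from the book) with the doubling map f(x) = 2x mod 1 on the circle. The map expands distances by 2, so its inverse branches contract by 2; iterating backwards along a noisy 'pseudo-orbit' therefore yields a genuine orbit that stays uniformly close to it, which is exactly what shadowing asserts.

```python
# Shadowing demo for the doubling map f(x) = 2x mod 1.
import random

def f(x):
    return (2 * x) % 1.0

def circle_dist(a, b):
    """Distance on the circle [0, 1) with endpoints identified."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

random.seed(0)
eps = 1e-3

# Pseudo-orbit: each step is f plus a perturbation of size at most eps.
p = [random.random()]
for _ in range(60):
    p.append((f(p[-1]) + random.uniform(-eps, eps)) % 1.0)

# Shadowing orbit: start at the end and apply, at each step, whichever
# inverse branch (y/2 or (y+1)/2) lies closer to the pseudo-orbit point.
# Each backward step halves the accumulated deviation.
x = [0.0] * len(p)
x[-1] = p[-1]
for i in range(len(p) - 2, -1, -1):
    candidates = (x[i + 1] / 2, (x[i + 1] + 1) / 2)
    x[i] = min(candidates, key=lambda c: circle_dist(c, p[i]))

# x is (up to float rounding) a true orbit staying within eps of p.
max_dev = max(circle_dist(a, b) for a, b in zip(x, p))
orbit_err = max(circle_dist(f(x[i]), x[i + 1]) for i in range(len(x) - 1))
print(max_dev, orbit_err)
```

In the review's speculative analogy, the noisy pseudo-orbit plays the role of an arbitrary dynamic pattern and the constructed true orbit plays the role of the nearby concept map that shadows it.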