Artificial Superintelligence: A Futuristic Approach Paperback – June 19, 2015

4.7 out of 5 stars (16 customer reviews)

Editorial Reviews

Review

"... a very interesting book. Crammed into some 200 pages, index included, the book tries to establish a method of measuring progress in artificial intelligence (AI) by creating an AI analogy to the work of Stephen Cook and others in computational complexity. Specifically, the book introduces the author's concepts of AI-complete and AI-hard as analogies to the computational complexity categories of NP-complete and NP-hard. Yampolskiy (Univ. of Louisville) makes his case in just ten chapters. Chapter 1 introduces the topic of AI-Completeness. Chapters 2 through 8 elaborate the details of the author's vision of superintelligences. Chapter 9, 'Efficiency Theory: A Unifying Theory for Information, Computation, and Intelligence,' brings together the diversity of issues presented in the earlier chapters and does a good job of unifying the book. Yampolskiy presents his thoughts on AI's future in the final chapter. Each chapter includes an impressive collection of references, and the text has a healthy index. In general, this work should interest researchers in both AI and computational complexity. Readers may also wish to consult Nick Bostrom's Superintelligence (CH, Mar'15, 52-3620). Summing up: Highly recommended. Upper-division undergraduates through professionals/practitioners."
―J. Beidler, University of Scranton, Pennsylvania, USA, for CHOICE, March 2016

"Concerns over the existential risks of artificial superintelligence have spawned multiple vectors of research and development into specification, validation, security, and control. Roman Yampolskiy’s Artificial Superintelligence: A Futuristic Approach reviews the relevant literature and stakes out the territory of AI safety engineering. Specifically, Yampolskiy advocates formal approaches to characterizing AIs and systematic confinement of superintelligent AIs. Serious students of AI and artificial general intelligence should study this work, and consider its recommendations."
―Neil Jacobstein, Chair, AI and Robotics, Singularity University at NASA Research Park, and Distinguished Visiting Scholar, MediaX Program at Stanford University

"There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence (AI). And then there are those who dare go in the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the terminator then Prof. Roman Yampolskiy may turn out to be like John Connor―but better. Because instead of fighting by using guns and brawn, he is utilizing computer science, human intelligence, and code."
―Nikola Danaylov, SingularityWeblog.com, September 7, 2015

"In his new book Artificial Superintelligence, Yampolskiy argues for addressing AI potential dangers with a safety engineering approach, rather than with loosely defined ethics, since human values are inconsistent and dynamic. … Yampolskiy acknowledges the concern of AI escaping confines and takes the reader on a tour of AI taxonomies with a general overview of the field of Intelligence … Yampolskiy proposes initiation of an AI hazard symbol, which could prove useful for constraining AI to designated containment areas … For readers intrigued by what safe variety of AI might be possible, the section of Artificial Superintelligence early in the book will be of great interest."
―Cynthia Sue Larson, RealityShifters Blog, September 1, 2015

"… the hot topic that seems to have come straight from science fiction ... vigorous academic analysis pursued by the author produced an awesome textbook that should attract everyone’s attention: from high school to graduate school students to professionals."
―Leon Reznik, Professor of Computer Science, Rochester Institute of Technology

"This new book by Roman Yampolskiy is truly futuristic. I have had the chance to see some of his previous works, and this one is his best so far. Not to be missed by anyone really interested in artificial intelligence and the future of humanity. This book is a tour-de-force with deep insights into artificial intelligence and the future by one of the young experts in this fascinating field."
―Jose Cordeiro, Director, The Millennium Project, Venezuela Node

About the Author

Roman V. Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo (Buffalo, NY). There, he was a recipient of a four-year National Science Foundation (NSF) Integrative Graduate Education and Research Traineeship (IGERT) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in computer science from the Rochester Institute of Technology in New York State.

Dr. Yampolskiy’s main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games. Dr. Yampolskiy is an author of over 100 publications, including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines, both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News), and on radio (German National Radio, Alex Jones Show).

Product Details

  • Paperback: 227 pages
  • Publisher: Chapman and Hall/CRC; 2015 edition (June 19, 2015)
  • Language: English
  • ISBN-10: 1482234432
  • ISBN-13: 978-1482234435
  • Product Dimensions: 7.1 x 0.6 x 10 inches
  • Shipping Weight: 13.4 ounces
  • Average Customer Review: 4.7 out of 5 stars (16 customer reviews)
  • Amazon Best Sellers Rank: #1,085,146 in Books

Customer Reviews

Format: Kindle Edition | Verified Purchase
This book reviews the difficulties of securing safe AGI. Overall, I think the greatest service it provides is to show future generations where others have already looked, and failed. And in this regard this book was tremendous.

I also appreciated the comprehensive references on every page, which let someone new to the field get caught up quickly. I also loved how this book tries to bring more computer scientists into the conversation.

My only critique is that the author's personal fears sometimes leak into the main text. While editorial views are commonplace in most books, I thought they felt out of place in this otherwise scholarly work.
One person found this helpful.
By Calum on April 11, 2016
Format: Paperback
Dr Roman Yampolskiy is a tenured computer scientist at the University of Louisville. He has published over 100 papers and books on artificial intelligence, genetic algorithms and behavioural biometrics. This is obviously a strong pedigree for a book about the subject of how to make sure that the arrival of superintelligence on the planet is an event that works out well for humans – which is probably the single most important challenge facing humanity this century.

Yampolskiy’s preference for a safety engineering approach over an ethics approach to the Friendly AI problem is refreshing. The book faces up squarely to the immense difficulty of controlling an entity which is many times smarter than its would-be controllers, and is an important contribution to a vital field.
One person found this helpful.
Format: Paperback
This book by Yampolskiy is a great book for researchers looking to get their hands dirty, after reading some other primer such as Bostrom's Superintelligence, and to do their part to increase the chances of a positive impact of AI on humanity. The book has very thoughtful proposals in specific areas and essentially clarifies some muddy concepts in AI. A must-have for AI researchers.
2 people found this helpful.
Format: Paperback
Yampolskiy summarizes the Singularity Paradox (SP) as "superintelligent machines are feared to be too dumb to possess common sense." Put even more simply, there is growing concern about the dangers of Artificial Intelligence (AI) amongst some of the world's best-educated and most well-respected scientific leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layman.

In Artificial Superintelligence, Yampolskiy argues for addressing AI's potential dangers with a safety engineering approach, rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that "fully autonomous machines cannot ever be assumed to be safe," going so far as to add, "... and so should not be constructed" (p 186).

Yampolskiy acknowledges the concern of AI escaping its confines, and takes the reader on a tour of AI taxonomies with a general overview of the field of intelligence, showing a Venn-type diagram (p 30) in which 'human minds' and 'human-designed AI' occupy adjacent real estate on a nonlinear terrain of 'minds in general' in multidimensional super space. 'Self-improving minds' are envisioned that improve upon 'human-designed AI,' and at this very juncture arises the potential for 'universal intelligence' and the Singularity Paradox (SP) problem.

Yampolskiy proposes initiation of an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, in J.A.I.L. or 'Just for A.I. Location.' Part of Yampolskiy's proposed solution to the AI Confinement Problem includes asking 'safe questions' (p 137).
One person found this helpful.
Format: Paperback
This book is a must-read for anyone looking for insight into the fascinating and terrifying world of artificial intelligence. In recent years, safeguarding AI has gone from a purely fictional, theoretical topic to a real concern for humanity, with figures such as Elon Musk and Stephen Hawking drawing attention to the issue. Dr. Yampolskiy, one of the leading scientists in this field, shows the implications of recent advancements in AI and proposes practical solutions to what is quickly becoming a practical problem of AI safety.
3 people found this helpful.
Format: Kindle Edition
This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn't strike me as someone with experience at writing useful code. His claim that "to increase their speed [AIs] will attempt to minimize the size of their source code" sounds like a misconception that wouldn't occur to an experienced programmer. And his chapter "How to Prove You Invented Superintelligence So No One Else Can Steal It" seems like a cute game that someone might play with if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn't destroy the world.

I'm still puzzling over some of his novel suggestions for reducing AI risks. How would "convincing robots to worship humans as gods" differ from the proposed Friendly AI? Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities.

The style is also weird. Some chapters were previously published as separate papers, and weren't adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. For example, when discussing the physical limits to intelligence, he cites (Einstein 1905).

Only read this if you've read other authors on this subject first (such as Bostrom).
4 people found this helpful.
