Broadly educated in poetry and computers and deeply immersed in philosophy, Brian Christian writes about his becoming The Most Human Human. The depth and breadth of his exposition, the importance of the idea -- how will we know if machines become humanly intelligent? -- and his account of a Turing Test contest make for a wonderful read. His writing is charming, elegant, guaranteed to inform, and sure to intrigue.
Mr. Christian's central theme is his participation in a Turing Test contest created by Hugh Loebner and Robert Epstein ([...]), an idea originated by the British computer genius Alan Turing. Turing proposed that a computer is intelligent when a person (a "judge") typing and receiving notes both from another person and from a computer cannot tell which correspondent is the human. Each year since 1991, the Loebner Prize has been awarded to the computer program that best fools the judges. A corresponding prize goes to the most human human: the person, among several, whom the judges rate as most certainly human. Mr. Christian won this award in 2009.
Mr. Christian, more often than not, subordinates his description of the contest itself to the subtitle of his book -- "What It Means to Be Alive." In short, interrelated sections that show his intense preparation for the Loebner competition, he relates computer contexts to our daily lives. I particularly liked his treatment of the concept "book" as applied to Garry Kasparov's chess match with IBM's Deep Blue computer. Chess, as played by man and machine, includes openings and endings that can be "memorized" -- this is the "book" -- the previously established series of chess moves that humans and machines store in their memories. Thus, oftentimes, it is only in the middle game that true chess skill comes into play. Mr. Christian wonderfully shows us how the "book" concept is of general human importance, concluding, "And the book, for me, becomes a metaphor for the whole of life." He similarly wows readers with his discussion of data compression.
No less interesting are his other tales and insights. For example, he retells the story of Professor Kevin Warwick of the University of Reading who, in the late 1990s and early 2000s, had various electronic devices implanted in his arm. Among these devices, the professor used active ultrasonic sensors to mimic sonar as his sixth sense -- he could "feel" objects without touching them. With another implanted device, Warwick remotely communicated with his wife who also had electronic implants: this was the first ever purely electronic communication conducted between two human nervous systems. Beyond these few examples, Mr. Christian enlightens us as to how computer programs have trouble with "barge-in" conversations, why "apricot" and "prescient" have the same root, and more.
Although Mr. Christian doesn't explicitly draw the conclusion, one can infer from his writing that Alan Turing was wrong. The Turing Test seems unable to provide more than a superficial evaluation of intelligence. A machine with no "life," body, history, or actual experiences seems quite unable ever to convince us that it possesses a true intellect by winning this sort of contest.
Still, if the Turing Test is ultimately a poor barometer of computer capability, the greater question remains: can machines ever become humanly intelligent? Mr. Christian barely offers his opinion on this matter, only writing near the very end that, "Some people imagine the future as a kind of heaven ... [e.g., Ray Kurzweil] ... Others ... as a kind of hell [e.g., The Matrix]. I'm no futurist, but I ... think of ... AI as a kind of purgatory: the place where the flawed, good-hearted go to be purified -- and tested -- and to come out better on the other side."
I, and most probably other readers, would have liked more such commentary -- to know what Mr. Christian thinks about humankind's future in the face of rising machine intelligence. This is an under-appreciated concern that deserves our awareness.
Interestingly, the 2009 Loebner Prize competition was a perfect opportunity to focus our attention. The other winner that year -- the person whose program won the most human computer award -- was David Levy, who also wrote Love + Sex with Robots, which I use in my Queens College, CUNY Sociology course Posthuman Society. Levy argues that by 2050, humans will be conversing with, forming social relations with, having sex with, and perhaps even marrying autonomous robots. Surely, if this happens -- and Levy's strong credentials make him a credible prognosticator -- we will be forced to conclude that machines have become intelligent, no matter how strange or imperfect their programming may seem. And with this, humankind's future will be forever changed -- I don't think for the better -- even if we survive the experience. Of course, Levy could be wrong. Producing the advanced robots that he envisions may require too enormous an effort, if it's even possible.
But I don't think Levy is wrong. The New York Times (8/16/11), for example, reports that Stanford University will offer a free online course in AI this fall, taught by two leading experts. More than 58,000 people worldwide have already registered for the course, which was advertised only virally. Why such great interest? Because people are curious, in part, but also because NASA needs intelligent robots to explore space. Our military has deployed intelligent machines to fight in Afghanistan. Business wants smart robots to manufacture cheaper and better goods. Google is spearheading the production of robotically driven cars. Japan seeks intelligent robots to care for its aging population. And sharing love and sex with machines is already well underway. Smart robots are going to solve many human problems but also create others, with dramatic consequences -- a future that I believe is inevitable.
That said, my comments should in no way detract from Brian Christian's marvelous book. He is a gifted, informative writer with a keen eye for the human condition. I look forward with great anticipation to curling up with his next provocative volume.