You Are Not a Gadget: A Manifesto Hardcover – Deckle Edge, January 12, 2010
by Jaron Lanier (Author)
Audible Audiobook, Unabridged: $0.00 (free with your Audible trial)
Editorial Reviews
Amazon.com Review
A Q&A with Author Jaron Lanier
Question: As one of the first visionaries in Silicon Valley, you saw the initial promise the internet held. Two decades later, how has the internet transformed our lives for the better?
Jaron Lanier: The answer is different in different parts of the world. In the industrialized world, the rise of the Web has happily demonstrated that vast numbers of people are interested in being expressive to each other and the world at large. This is something that I and my colleagues used to boldly predict, but we were often shouted down, as the mainstream opinion during the age of television’s dominance was that people were mostly passive consumers who could not be expected to express themselves. In the developing world, the Internet, along with mobile phones, has had an even more dramatic effect, empowering vast classes of people in new ways by allowing them to coordinate with each other. That has been a very good thing for the most part, though it has also enabled militants and other bad actors.
Question: You argue the web isn’t living up to its initial promise. How has the internet transformed our lives for the worse?
Jaron Lanier: The problem is not inherent in the Internet or the Web. Deterioration only began around the turn of the century with the rise of so-called "Web 2.0" designs. These designs valued the information content of the web over individuals. It became fashionable to aggregate the expressions of people into dehumanized data. There are so many things wrong with this that it takes a whole book to summarize them. Here’s just one problem: It screws the middle class. Only the aggregator (like Google, for instance) gets rich, while the actual producers of content get poor. This is why newspapers are dying. It might sound like it is only a problem for creative people, like musicians or writers, but eventually it will be a problem for everyone. When robots can repair roads someday, will people have jobs programming those robots, or will the human programmers be so aggregated that they essentially work for free, like today’s recording musicians? Web 2.0 is a formula to kill the middle class and undo centuries of social progress.
Question: You say that we’ve devalued intellectual achievement. How?
Jaron Lanier: On one level, the Internet has become anti-intellectual because Web 2.0 collectivism has killed the individual voice. It is increasingly disheartening to write about any topic in depth these days, because people will only read what the first link from a search engine directs them to, and that will typically be the collective expression of the Wikipedia. Or, if the issue is contentious, people will congregate into partisan online bubbles in which their views are reinforced. I don’t think a collective voice can be effective for many topics, such as history--and neither can a partisan mob. Collectives have a power to distort history in a way that damages minority viewpoints and calcifies the art of interpretation. Only the quirkiness of considered individual expression can cut through the nonsense of the mob--and that is the reason intellectual activity is important.
On another level, when someone does try to be expressive in a collective, Web 2.0 context, she must prioritize standing out from the crowd. To do anything else is to be invisible. Therefore, people become artificially caustic, flattering, or otherwise manipulative.
Web 2.0 adherents might respond to these objections by claiming that I have confused individual expression with intellectual achievement. This is where we find our greatest point of disagreement. I am amazed by the power of the collective to enthrall people to the point of blindness. Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of a 1970s technology called UNIX. If it weren’t produced by a collective, there would be nothing remarkable about it at all.
Meanwhile, the truly remarkable designs that couldn’t have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create something and polish it before it is released to the public. Collectivists confuse ideology with achievement.
Question: Why has the idea that "the content wants to be free" (and the unrelenting embrace of the concept) been such a setback? What dangers do you see this leading to?
Jaron Lanier: The original turn of phrase was "Information wants to be free." And the problem with that is that it anthropomorphizes information. Information doesn’t deserve to be free. It is an abstract tool; a useful fantasy, a nothing. It is nonexistent until and unless a person experiences it in a useful way. What we have done in the last decade is give information more rights than are given to people. If you express yourself on the internet, what you say will be copied, mashed up, anonymized, analyzed, and turned into bricks in someone else’s fortress to support an advertising scheme. However, the information, the abstraction, that represents you is protected within that fortress and is absolutely sacrosanct, the new holy of holies. You never see it and are not allowed to touch it. This is exactly the wrong set of values.
The idea that information is alive in its own right is a metaphysical claim made by people who hope to become immortal by being uploaded into a computer someday. It is part of what should be understood as a new religion. That might sound like an extreme claim, but go visit any computer science lab and you’ll find books about "the Singularity," which is the supposed future event when the blessed uploading is to take place. A weird cult in the world of technology has done damage to culture at large.
Question: In You Are Not a Gadget, you argue that the idea that the collective is smarter than the individual is wrong. Why is this?
Jaron Lanier: There are some cases where a group of people can do a better job of solving certain kinds of problems than individuals. One example is setting a price in a marketplace. Another example is an election process to choose a politician. All such examples involve what can be called optimization, where the concerns of many individuals are reconciled. There are other cases that involve creativity and imagination. A crowd process generally fails in these cases. The phrase "Design by Committee" is treated as derogatory for good reason. That is why a collective of programmers can copy UNIX but cannot invent the iPhone.
In the book, I go into considerably more detail about the differences between the two types of problem solving. Creativity requires periodic, temporary "encapsulation" as opposed to the kind of constant global openness suggested by the slogan "information wants to be free." Biological cells have walls, academics employ temporary secrecy before they publish, and real authors with real voices might want to polish a text before releasing it. In all these cases, encapsulation is what allows for the possibility of testing and feedback that enables a quest for excellence. To be constantly diffused in a global mush is to embrace mundanity.
(Photo © Jonathan Sprague)
Review
“Important . . . At the bottom of Lanier’s cyber-tinkering is a fundamentally humanist faith in technology, a belief that wisely designed machines can bring us closer together by expanding the possibilities of creative self-expression . . . His mind is a fascinating place to hang out.”
—Ben Ehrenreich, Los Angeles Times
“Persuasive . . . [Lanier] is the first great apostate of the Internet era.”
—David Wallace-Wells, Newsweek
“Thrilling and thought-provoking . . . A necessary corrective in the echo chamber of technology debates. You Are Not a Gadget challenges many dominant ideologies and poses theoretical questions, the answers to which might start with one bright bulb, but depend on the friction of engaged parties. In other words, Lanier is acting like a computer scientist. Let’s hope he is not alone.”
—John Freeman, San Francisco Chronicle
“A call for a more humanistic—to say nothing of humane—alternative future in which the individual is celebrated more than the crowd and the unique more than the homogenized . . . You Are Not a Gadget may be its own best argument for exalting the creativity of the individual over the collective efforts of the ‘hive mind.’ It’s the work of a singular visionary, and offers a hopeful message: Resistance may not be futile after all.”
—Rich Jaroslovsky, Bloomberg.com
“Provocative . . . [Lanier] confronts the big issues with bracing directness . . . The reader sits up. One of the insider’s insiders of the computing world seems to have gone rogue.”
—Sven Birkerts, The Boston Globe
“Sparky, thought-provoking . . . This is good knockabout stuff, and Lanier clearly enjoys rethinking received tech wisdom: his book is a refreshing change from Silicon Valley’s usual hype.”
—Paul Marks, New Scientist
“Lanier’s detractors have accused him of Ludditism, but his argument will make intuitive sense to anyone concerned with questions of propriety, responsibility, and authenticity.”
—The New Yorker
“Poetic and prophetic, this could be the most important book of the year. The knee-jerk notion that the net as it is being developed sets us free is turned on its head . . . Read this book and rise up against net regimentation!”
—Iain Finlayson, The Times (London)
“From crowd-sourcing to social networking and mash-ups, Lanier dismantles the tropes of the current online culture.”
—Bloomberg.com, “Five Top Business Books of 2010”
“Lanier asks some important questions . . . He offers thoughtful solutions . . . Gadget is an essential first step at harnessing a post-Google world.”
—Eli Sanders, The Stranger (Seattle)
“Lanier turns a philosopher’s eye to our everyday online tools . . . The reader is compelled to engage with his work, to assent, contradict, and contemplate. In this, Lanier’s manifesto is not just a success, but a meta-success . . . Lovers of the Internet and all its possibilities owe it to themselves to plunge into Lanier’s [You Are Not a Gadget] and look hard in the mirror. He’s not telling us what to think; he’s challenging us to take a hard look at our cyberculture, and emerge with new creative inspiration.”
—Carolyn Kellogg, Flavorwire
“Inspired, infuriating and utterly necessary . . . Lanier tells of the loss of a hi-tech Eden, of the fall from play into labour, obedience and faith. Welcome to the century’s first great plea for a ‘new digital humanism’ against the networked conformity of cyber-space. This eloquent, eccentric riposte comes from a sage of the virtual world who assures us that, in spite of its crimes and follies, ‘I love the internet.’ That provenance will only deepen its impact, and broaden its appeal.”
—Boyd Tonkin, The Independent (London)
“A must read for 2010.”
—Library Journal
“Lanier’s fascinating and provocative full-length exploration of the Internet’s problems and potential is destined to become a must-read for both critics and advocates of online-based technology and culture . . . He brilliantly shows how large Web 2.0–based information aggregators such as Amazon.com—as well as proponents of free music file sharing—have created a ‘hive mind’ mentality emphasizing quantity over quality.”
—Publishers Weekly
“Jaron Lanier’s long awaited book is fabulous—I couldn’t put it down. His is a rare voice of sanity in the debate about the relationship between computers and human beings. This is a landmark book that will have people talking and arguing for years into the future.”
—Lee Smolin, The Trouble with Physics
“This is the single most important book yet written about our increasingly digital world. It will be remembered either as the manifesto that rescued humanity from the brink of extinction, or as the last cogent missive from an obsolete species.”
—Douglas Rushkoff, author of Life Inc., Media Virus, and Cyberia
“In this sane and spirited critique of Internet dogma, Jaron Lanier also delivers a timely defense of the value of the individual human being.”
—Nicholas Carr, author of Does IT Matter? and The Big Switch
“Important . . . Highly relevant . . . An impassioned and original critique of what the digital world has become . . . A much-needed defence of the humanist values that are being trampled underfoot . . . If ever there was an answer to the question, ‘Who needs thinkers when you have Wikipedia?’, this book is surely it.”
—John Stones, Design Week (UK)
Excerpt. © Reprinted by permission. All rights reserved.
The ideas that I hope will not be locked in rest on a philosophical foundation that I sometimes call cybernetic totalism. It applies metaphors from certain strains of computer science to people and the rest of reality. Pragmatic objections to this philosophy are presented.
What Do You Do When the Techies Are Crazier Than the Luddites?
The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Vernor Vinge and Ray Kurzweil.
There are many versions of the fantasy of the Singularity. Here’s the one Marvin Minsky used to tell over the dinner table in the early 1980s: One day soon, maybe twenty or thirty years into the twenty-first century, computers and robots will be able to construct copies of themselves, and these copies will be a little better than the originals because of intelligent software. The second generation of robots will then make a third, but it will take less time, because of the improvements over the first generation.
The process will repeat. Successive generations will be ever smarter and will appear ever faster. People might think they’re in control, until one fine day the rate of robot improvement ramps up so quickly that superintelligent robots will suddenly rule the Earth.
In some versions of the story, the robots are imagined to be microscopic, forming a “gray goo” that eats the Earth; or else the internet itself comes alive and rallies all the net-connected machines into an army to control the affairs of the planet. Humans might then enjoy immortality within virtual reality, because the global brain would be so huge that it would be absolutely easy—a no-brainer, if you will—for it to host all our consciousnesses for eternity.
The coming Singularity is a popular belief in the society of technologists. Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore.
(Just in case you are not familiar with the Rapture, it is a colorful belief in American evangelical culture about the Christian apocalypse. When I was growing up in rural New Mexico, Rapture paintings would often be found in places like gas stations or hardware stores. They would usually include cars crashing into each other because the virtuous drivers had suddenly disappeared, having been called to heaven just before the onset of hell on Earth. The immensely popular Left Behind novels also describe this scenario.)
There might be some truth to the ideas associated with the Singularity at the very largest scale of reality. It might be true that on some vast cosmic basis, higher and higher forms of consciousness inevitably arise, until the whole universe becomes a brain, or something along those lines. Even at much smaller scales of millions or even thousands of years, it is more exciting to imagine humanity evolving into a more wonderful state than we can presently articulate. The only alternatives would be extinction or stodgy stasis, which would be a little disappointing and sad, so let us hope for transcendence of the human condition, as we now understand it.
The difference between sanity and fanaticism is found in how well the believer can avoid confusing consequential differences in timing. If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority. You might even be eager to embrace wars and tolerate poverty and disease in others to bring about the conditions that could prod the Rapture into being. In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.
But in either case, the rest of us would never know if you had been right. Technology working well to improve the human condition is detectable, and you can see that possibility portrayed in optimistic science fiction like Star Trek.
The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new superconsciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living.
You Need Culture to Even Perceive Information Technology
Ever more extreme claims are routinely promoted in the new digital climate. Bits are presented as if they were alive, while humans are transient fragments. Real people must have left all those anonymous comments on blogs and video clips, but who knows where they are now, or if they are dead? The digital hive is growing at the expense of individuality.
Kevin Kelly says that we don’t need authors anymore, that all the ideas of the world, all the fragments that used to be assembled into coherent books by identifiable authors, can be combined into one single, global book. Wired editor Chris Anderson proposes that science should no longer seek theories that scientists can understand, because the digital cloud will understand them better anyway.*
Antihuman rhetoric is fascinating in the same way that self-destruction is fascinating: it offends us, but we cannot look away.
The antihuman approach to computation is one of the most baseless ideas in human history. A computer isn’t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don’t mean anything without a cultured person to interpret them.
This is not solipsism. You can believe that your mind makes up the world, but a bullet will still kill you. A virtual bullet, however, doesn’t even exist unless there is a person to recognize it as a representation of a bullet. Guns are real in a way that computers are not.
Making People Obsolete So That Computers Seem More Advanced
Many of today’s Silicon Valley intellectuals seem to have embraced what used to be speculations as certainties, without the spirit of unbounded curiosity that originally gave rise to them. Ideas that were once tucked away in the obscure world of artificial intelligence labs have gone mainstream in tech culture. The first tenet of this new culture is that all of reality, including humans, is one big information system. That doesn’t mean we are condemned to a meaningless existence. Instead there is a new kind of manifest destiny that provides us with a mission to accomplish. The meaning of life, in this view, is making the digital system we call reality function at ever-higher “levels of description.”
People pretend to know what “levels of description” means, but I doubt anyone really does. A web page is thought to represent a higher level of description than a single letter, while a brain is a higher level than a web page. An increasingly common extension of this notion is that the net as a whole is or soon will be a higher level than a brain. There’s nothing special about the place of humans in this scheme. Computers will soon get so big and fast and the net so rich with information that people will be obsolete, either left behind like the characters in Rapture novels or subsumed into some cyber-superhuman something.
Silicon Valley culture has taken to enshrining this vague idea and spreading it in the way that only technologists can. Since implementation speaks louder than words, ideas can be spread in the designs of software. If you believe the distinction between the roles of people and computers is starting to dissolve, you might express that—as some friends of mine at Microsoft once did—by designing features for a word processor that are supposed to know what you want, such as when you want to start an outline within your document. You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline. While I am all for the automation of petty tasks, this is different.
From my point of view, this type of design feature is nonsense, since you end up having to work more than you would otherwise in order to manipulate the software’s expectations of you. The real function of the feature isn’t to make life easier for people. Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves.
Another example is what I call the “race to be most meta.” If a design like Facebook or Twitter depersonalizes people a little bit, then another service like Friendfeed—which may not even exist by the time this book is published—might soon come along to aggregate the previous layers of aggregation, making individual people even more abstract, and the illusion of high-level metaness more celebrated.
Information Doesn’t Deserve to Be Free
“Information wants to be free.” So goes the saying. Stewart Brand, the founder of the Whole Earth Catalog, seems to have said it first.
I say that information doesn’t deserve to be free.
Cybernetic totalists love to think of the stuff as if it were alive and had its own ideas and ambitions. But what if information is inanimate? What if it’s even less than inanimate, a mere artifact of human thought? What if only humans are real, and information is not?
Of course, there is a technical use of the term “information” that refers to something entirely real. This is the kind of information that’s related to entropy. But that fundamental kind of information, which exists independently of the culture of an observer, is not the same as the kind we can put in computers, the kind that supposedly wants to be free.
Information is alienated experience.
You can think of culturally decodable in...
Product details
- Publisher : Knopf; 1st edition (January 12, 2010)
- Language : English
- Hardcover : 224 pages
- ISBN-10 : 0307269647
- ISBN-13 : 978-0307269645
- Item Weight : 14.4 ounces
- Dimensions : 5.84 x 0.9 x 8.73 inches
Best Sellers Rank:
#863,054 in Books (See Top 100 in Books)
- #1,004 in Social Aspects of Technology
- #2,447 in Internet & Telecommunications
- #26,002 in Engineering (Books)
Customer reviews
Top reviews from the United States
He wrote in the Introduction to the paperback edition of this 2010 book, “This book is not antitechnology in any sense. It is prohuman. [It] argues that certain specific, popular internet designs of the moment---not the internet as a whole---tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual. These unfortunate designs are more oriented toward treating people as relays in a global brain. Deemphasizing personhood, and the intrinsic value of an individual’s unique internal experience and creativity, leads to all sorts of maladies…
“While the core argument might be described as ‘spiritual,’ there are also profound political and economic implications. For instance, the idea that information should be ‘free’ sounds good at first. But the unintended result is that all the clout and money generated online has begun to accumulate around the people close to only certain highly secretive computers, many of which are essentially spying operations designed to pull money out of a marketplace…. The implications of the rise of ‘digital serfdom’ couldn’t be more profound. As technology gets better and better, and civilization becomes more and more digital, one of the major questions we will have to address is: Will a sufficiently large middle class of people be able to make a living from what they do with their hearts and heads? Or will they be left behind, distracted by empty gusts of ego-boosting puffery?” (Pg. ix-x)
At the end of Chapter 1, he adds, “in this book, I have spun a long tale of belief in the opposites of computationalism, the noosphere, the Singularity, web 2.0, the long tail, and all the rest. I hope the volume of my contrarianism will foster an alternative mental environment, where the exciting opportunity to start creating a new digital humanism can begin. An inevitable side effect of this project of deprogramming through immersion is that I will direct a sustained stream of negativity onto the ideas I am criticizing. Readers, be assured that the negativity eventually tapers off, and that the last few chapters are optimistic in tone.” (Pg. 23)
He observes, “The coming Singularity is a popular belief in the society of technologists. Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore… The Singularity… would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new superconsciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living.” (Pg. 25-26)
He notes, “if you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive. So for you, it will be important to redesign human institutions like art, the economy, and the law to reinforce the perception that information is alive. You demand that the rest of us live in your new conception of a state religion. You need us to deify information to reinforce your faith.” (Pg. 28-29)
He suggests, “It seems to me… that the Turing Test has been poorly interpreted by generations of technologists. It is usually presented to support the idea that machines can attain whatever quality it is that gives people consciousness… What the test really tells us, however… is that machine intelligence can only be known in a relative sense, in the eyes of a human beholder. The AI way of thinking is central to the idea I’m criticizing in this book. If a machine can be conscious, then the computing cloud is going to be a better and far more capacious consciousness than is found in an individual person. If you believe this, then working for the benefit of the cloud over individual people puts you on the side of the angels. But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?” (Pg. 31-32)
He explains, “To help you learn to doubt the fantasies of the cybernetic totalists, I offer two dueling thought experiments. The first [is]… Imagine a computer program that can simulate a neuron… Now imagine a tiny wireless device that can send and receive signals to neurons in the brain… hire a neurosurgeon to open your skull… Replace one nerve in your brain with one of those wireless gadgets. (Even if such gadgets were already perfected, connecting them would not be possible today. The artificial neuron would have to engage all the same synapses---around seven thousand, on average---as the biological nerve it replaced.) Next, the artificial neuron will be connected over a wireless link to a simulation of a neuron in a nearby computer… There are between 100 billion and 200 billion neurons in a human brain, so even at only a second per neuron, this will require tens of thousands of years. Now for the big question: Are you still conscious after the process has been completed?... Does the computer then become a person? If you believe in consciousness, is your consciousness now in the computer, or perhaps in the software? The same question can be asked about souls, if you believe in them.” (Pg. 40)
He points out, “We don’t understand how brains work… there are fundamental questions that have not even been fully articulated yet, much less answered. For instance, how does reason work? How does meaning work?... While the physical brain is a product of evolution as we are coming to understand it, the cultural brain might be a way of transforming the evolved brain according to principles that cannot be explained in evolutionary terms.” (Pg. 50-51)
He states, “In Silicon Valley … there is one belief system … [that] serves as a common framework… I call it computationalism … the underlying philosophy is that the world can be understood as a computational process, with people as subprocesses… I must make it clear that computationalism has its uses. Computationalism isn’t always crazy… If you want to consider people as special, as I have advised, then you need to be able to say at least a little bit about where the specialness begins and ends… If you hope for a technology to be designed to serve people, you must have at least a rough idea of what a person is and is not. But… Dividing the world into two parts, one of which is ordinary---deterministic or mechanistic, perhaps---and one of which is mystifying, or more abstract, is particularly difficult for scientists. This is the dreaded path of dualism. It is awkward to study neuroscience, for instance, if you assume that the brain is linked to some other entity---a soul---on a spirit plane… I am contradicting myself here, but the reason is that I find myself playing different roles at different times. Sometimes I am designing tools for people to use, while at other times I am working with scientists trying to understand how the brain works. Perhaps it would be better if I could find one single philosophy that I could apply equally to each circumstance, but I find that the best path is to believe different things about aspects of reality when I play these different roles…” (Pg. 153-154)
He acknowledges, “The most important thing about postsymbolic communication is that I hope it demonstrates that a humanist softie like me can be as radical and ambitious as any cybernetic totalist in both science and technology, while still believing that people should be considered differently, embodying a special category.” (Pg. 191)
He adds in the Afterword, “While there is a lot of talk in the air about whether to believe in god or not, I suspect that religious arguments are gradually incorporating coded debates about whether to even believe in people anymore. Are people just one form of information system, one form of gadget? The old debates about God are now also about us. For instance, when I suggest we should act as if we’re real---as if consciousness and experience exist, just in case it turns out we ARE real---I am retooling Pascal’s famous wager about God, but in this case applied to people.” (Pg. 206)
This fascinating and thought-provoking book will be “must reading” for those interested in the future of computer science, the philosophy of mind, and the direction modern culture is heading.
I couldn’t give it 5 stars because much of the information I had already gotten from other sources and the author stays within his niche of interest more than I would have liked.
So: not mind-blowing, but not horrible either.
Lanier's personal interest in exotic music fits with his own form of personal revolt against the dehumanizing forces at play. Laced with insightful snippets, Lanier nonetheless appears "locked" into his own matrix. Lanier both fears and believes in a digital future. Its economically and socially revolutionary potential intersects with a growing proportion of humanity. Yet just as Latin (then French) dominated intellectual discourse in the West (and arguably English does today in a more global sense), the very breadth of intellectual resources will eventually require the return of specialists and revitalize demand for "original" sources, in much the same manner that ideas from classical times became the passion of Renaissance humanists.
Lanier identifies the forces and possible implications of modern information technology. As the law of unintended consequences would suggest, Lanier defines a window with extensive potential to remake the world as we know it -- or at least as we perceive it. However, anticipating future trends (the mantra of human existence) carries the burden of having only the present to project into the future. Lanier's critique of contemporary culture is insightful, but it is also limited in its application to a passing generation.
Lanier aspires to pull us away from the dehumanizing abyss of current technology. His own evidence regarding the expanding capacity of computers and information technology generally, however, could as easily prove its own undoing as data evades quantification under the weight of its own quantity. Qualitative analyses (along with an appreciation of music) require a mature, intuitive cherry-picking of reliable evidence. Or, as previous leading thinkers have noticed, logical constructs preclude non-logical outcomes, which in turn preclude progress beyond what is commonly viewed as reasonable or rational. Or, to paraphrase Einstein: if you want your children to better understand math, teach them fairy tales. Perhaps Lanier can bridge this gap in his next book.
Top reviews from other countries
Have carried this book around with me for a couple of years. Just finished it today. Great read and lots to think about. Would not claim to understand all of the points made but food for thought for anyone like myself who spends much time contributing to social networks.
Lanier deals with a long list of concerns he has about recent developments. In fact, one of these relates to information being taken out of context, e.g. fragments being reused across various social networks. Since reviewing the book necessarily means selecting only some of its ideas, I suggest that if the subject matter interests you, you read the full book.
The author addresses the subject of ‘authorship’ - referencing a discussion between Kevin Kelly (who postulates that eventually there will be only one book) and John Updike on the subject. His opinion is that authorship is not a priority for the new ideology promoted by the singularity, the anti-humanist computer scientists, and the promoters of ‘digital maoism’ or the ‘noosphere’.
Lanier is highly critical of web 2.0 designs which actively demand that people define themselves downwards. Nor is he a fan of Wikipedia - which he sees as (1) a system that removes individual ‘points of view’ and (2) one that lends itself to ‘lazy’ search engines serving up its content as the first answer each time.
Lanier also has lower expectations of crowd wisdom than James Surowiecki. The author stresses the need for a combination of collective and individual intelligence. In fact, he would avoid having crowds frame their own questions. He has concerns about a society that risks mob rule as a follow-on from crowd wisdom in its extreme form.
Interestingly, the author claims to be optimistic and to see benefits in technology. But the technology should exist to serve people and to improve the human condition. He seems unconvinced about the benefits of much of the web 2.0 culture and its associated ideology. He sees it lending itself to a winner-take-all outcome - the lords of the cloud and search - while the creators of cultural experiences work for very little (if anything at all).
He spends a reasonable amount of time looking at modern music, suggesting that we have lost much of the creativity of previous generations - that in fact much of what we hear is a rehash of previously created music. Later in the book he also references phenotropics (his own programming/development approach).
Lanier is encouraging everyone to value their own individualism - in this context we are all encouraged to be expressive in our website content, to be reflective and to take more time in preparing blog postings. His concern is that we are devaluing the individual and are at risk of ‘spirituality committing suicide’ as consciousness wills itself out of existence.
He is a long way from accepting the Ray Kurzweil view (the ‘singularity’) - that ‘the computing cloud will scoop up the contents of our brains so we can live in virtual reality’. While not necessarily signing up to all of his commentary and analysis (e.g. re music), I certainly find myself more aligned with the humanist than the ‘noosphere’ group.
One of the over-arching themes here is that although we often regard the internet, technology, and all things spawned from these areas as outstanding advances that have improved society and humanity, there is a general trend toward imitation rather than true novelty. Lanier makes this point well, and it seems we as a species would do well to heed his warning and make conscious efforts and choices to encourage innovation and creativity, while not entirely quashing technological change and growth at the same time.
From a sociological and psychological point of view, Lanier provides us with some gems of insight when dealing with the perils of social media. One of my favourites is this (page 180): "Young people announce every detail of their lives on services like Twitter not to show off, but to avoid the closed door at bedtime, the empty room, the screaming vacuum of an isolated mind".
This is a great book, small in size but not quality, providing the reader with much to ponder.