- Hardcover: 320 pages
- Publisher: W. W. Norton & Company (August 30, 2016)
- Language: English
- ISBN-10: 039307899X
- ISBN-13: 978-0393078992
- Product Dimensions: 5.9 x 1.2 x 8.6 inches
- Shipping Weight: 1.2 pounds
- Average Customer Review: 5 customer reviews
- Amazon Best Sellers Rank: #329,924 in Books
The Ethics of Invention: Technology and the Human Future
“Jasanoff argues for an entirely new body of ethical discourse, going beyond technical risk assessment to give due weight to economic, cultural, social and religious perspectives. . . . Jasanoff thoughtfully discusses the limits of conventional risk analysis, with its biases in favour of innovation and quantification. . . . The book helps to pinpoint recurring patterns in contemporary technological debates and to frame what is at stake in their outcomes.”
- Steven Aftergood, Nature
“Impressively spanning advances in biomedicine, information technology, and green biotechnology, Jasanoff deftly draws out the social and political dramas of technological systems in a series of case studies, revealing how we attempt to steer new science and technology to create more equitable, sustainable, and prosperous societies. Jasanoff’s engaging prose brings essential and thoughtful attention to questions of justice, the limits of expert prediction, and the unwieldiness of responsibility in the 21st century.”
- Cynthia Selin, Science
“A remarkable book which brings government and technology into much-needed dialogue. Across disasters and designer babies, GMO crops and information technologies, Sheila Jasanoff expertly tracks the social and technological forces that shape our worlds. Drawing on the full range of her previous scholarship, she elegantly raises a number of profound questions concerning the possibilities for democratic control over technological forces which seem too fast, too complex and too unpredictable for our institutions to handle. Along the way, our very notion of democracy is extended, challenged and transformed.”
- Professor Alan Irwin, Department of Organization, Copenhagen Business School
“Not bewitched by technological promises, The Ethics of Invention reclaims the future for human creativity. Sheila Jasanoff opens our eyes to the fact that societies are governed by technical systems as much as by the rule of law. And if we want to govern ourselves well, we need collective imaginations of the world we want to live in.”
- Professor Alfred Nordmann, Darmstadt Technical University
About the Author
Sheila Jasanoff is professor of science and technology studies at Harvard Kennedy School. She is the author of many books on technology, most recently Science and Public Reason and Designs on Nature. She lives in Cambridge, Massachusetts.
Top customer reviews
In the first chapter, SJ lays out her goal of showing that the following attitudes are fallacious:
(A) technological determinism: "the theory that technology, once invented, possesses an unstoppable momentum" (@14);
(B) the idea of "technocracy": the notion "that technological inventions are managed and controlled by human actors, [but that] only those with specialist knowledge and skills can rise to the task" (@19); and
(C) the excuse of "unintended consequences," which "implies that it is neither possible nor needful to think about the kinds of things that eventually go wrong" (@23).
The subsequent chapters are more or less organized by type of technology or institutional arrangement: risk assessment, disasters, genetic modification of food, genetic modification of people, IT, intellectual property law, and institutions for technology management. SJ considers a large number of historical cases, including the Bhopal tragedy, various misadventures with genetically modified foods, Internet oligarchs, the appropriation of cells from the body of Henrietta Lacks, and many others, as well as looking at institutional arrangements for dealing with them, e.g. in US patent jurisprudence, Indian tort jurisprudence, and the World Trade Organization's dispute settlement mechanism. Scattered throughout are many examples pertinent to themes (A)-(C), but nonetheless I felt the organization of chapters made those themes less salient. Had the chapters been focused on technological determinism, technocracy, etc. and used examples from a variety of technologies in each, maybe the polemical point of the book would have been more forcefully made.
So what was I expecting? A discussion grounded in philosophical ethics (utilitarianism, deontology, virtue ethics, etc.), and which applies to inventors and companies, not only to decisions about how ethics panels, courts, and other governmental or quasi-governmental institutions regulate technology. Aside from a passing reference to Kantian ethics in the context of "savior siblings" (@140; the situation is when parents conceive a second child in order to have a donor to help treat an older child's disease), philosophical ethics doesn't make an appearance until some passages of the final chapter. Most of the book is about policy, not about philosophy; it's very much a product of Harvard's Kennedy School of Government, located some distance away both physically and mentally from the Philosophy Department in Harvard Yard.
I was especially hoping to see more discussion of the precautionary principle. One of the most notable enemies of precaution is SJ's Harvard colleague Cass Sunstein, who used to be President Obama's "regulatory czar" — but SJ mentions him only in an entirely unrelated context. Precaution is a favorite punching bag of neoliberals everywhere, even in France; in Paris a couple of years ago I picked up a few books devoted entirely to that theme. But SJ devotes only a paragraph to explaining it, and no space to defending it. (For an excellent philosophical defense, albeit in the context of environmental protection rather than innovation, see Douglas Kysar's Regulating from Nowhere: Environmental Law and the Search for Objectivity, which isn't cited here.)
A related problem was that the cases in the book are entirely retrospective — there isn't any speculation about the sorts of hypotheticals that philosophical ethicists love to worry about (and which don't have to be as tedious as the infamous trolley problems). While she doesn't invoke Donald Rumsfeld's impatient explanation "Stuff happens!" to explain unintended consequences, SJ relies on his famous "folk epistemology" of "known knowns / known unknowns / unknown unknowns." She makes the excellent point that because "scientific" risk assessment is limited to known unknowns, it ignores what often is most troubling to people — the unknown unknowns. So it's ironic that her book does the same thing, or even sticks to known knowns (though arguably matters dealing with values aren't really "knowns"): every problem she mentions is illustrated with a case that occurred in the past.
A benefit of a more philosophical ethical approach is that an imaginative author can often pluck problems from the realm of the unknown and bring them into the realm of the known unknown, at least. Here are some junior high school-level examples: suppose some mad genius or company really invents robot warriors, or an AI that could bring about the "singularity," surpassing human intelligence by orders of magnitude — what are the ethical choices they ought to consider? Suppose someone invents some new creature or microbe in her bathtub and thinks it would be fun to release it into the wild? How do we deal with the fact that many inventors are loners, and far from the reach of governance institutions? In fact, people are really trying to do such things, so these aren't idle questions. As SJ herself points out, experts often cop out on these questions with expressions like "But that seems unlikely for now": she notes that "silenced in this account is the what-if question" (@252).
She's right, and yet I felt that this book often wound up doing the same thing, without even raising the questions it was dodging. Movies like Spike Jonze's "Her" (in which Scarlett Johansson voices an AI) and TV shows like "Person of Interest" actually do a better job of considering the ethical issues I was hoping to see more soberly discussed here. And even some here-and-now examples, such as private drones, military drones, and driverless cars, don't get analyzed — perhaps because there hasn't yet been a lawsuit about them.
Nonetheless, SJ does at least graze some of these issues when, near the end of the book, she comes up with the terrific expression "inequality of anticipation" (@256): this captures the idea that the world's poor are unjustly expected to accept that "what the rich invented to fit their circumstances remains the gold standard for what the poor should need and want, only with fewer features, less sensory appeal, and possibly less likelihood of serving as platforms for autonomous development" (@257). It's typical of the book's wonkish style, though, that at the end of this passage SJ cites to an academic article by another scholar in a volume SJ co-edited in 2015: apparently at the Kennedy School they don't read Ivan Illich, who set forth the same idea clearly and passionately in his classic Tools for Conviviality, more than 40 years earlier.
The book has endnotes, but no list of references. If you're a student of technology policy, you might find this book right up your alley. For me, though, the final chapter, with its brief flashes of ethical philosophy and energetic advocacy, was something like the introduction to the book I had been hoping to read.
Jasanoff convincingly argues that the future of a technology is shaped not only by the technology itself, but also by the decisions we make about how to use it, how to regulate it, and how to modify it. And those “decisions” can be either passive or active: Simply allowing a technology to develop without any meaningful democratic input is itself a decision, even if we don’t think of it that way. She argues that the question of how we manage new technologies is especially important today, because we’re confronting a host of new technologies that give those who use them enormous power. These include many of the same technologies Goldin and Kutarna celebrate, such as synthetic biology, gene therapy, cloning, and genetically modified foods.
We’re also facing new surveillance technologies that effectively make it possible for the government and private companies to track us 24/7, listen to our phone calls, and read our emails and text messages as a matter of course (how, after all, does Google serve up ads pegged to what’s in your emails?). The nature of patent law (in the case of science) and network effects (in the case of information) means that a very small number of companies could end up exerting a tremendous amount of control over these technologies. (In the case of what Jasanoff calls the data
oligarchs, such as Facebook and Google, they’re already exerting it.) Given that CRISPR, a genome-editing technique, makes it possible to reshape the very building blocks of life with enormous ease, it seems sensible to find mechanisms that allow a broad-based, democratic discussion about the risks and benefits of new technologies — as well as what values we want to govern the use of those technologies.
Such talk raises the hackles of many entrepreneurs, who think the only “governance” we need is to throw things out into the world and let the workings of the market sort them out. But although that process works fine in the case of, say, a new razor, it seems somehow inadequate for dealing with something like CRISPR. More important for business, Jasanoff correctly argues that if you don’t allow the public some voice at the beginning, you will end up having to deal with the public after the fact. She tells the story of golden rice, a genetically modified rice that includes elevated levels of beta-carotene (many children in the developing world are deficient in Vitamin A, which can lead to blindness;
and beta-carotene is a precursor to Vitamin A). Golden rice seems like a genuine boon to the developing world — an easy and cheap solution to a serious problem. Yet its introduction in those markets has been slowed, and in some cases blocked altogether, by the backlash against companies such as Syngenta and Monsanto and their role in pushing genetically modified seeds and crops. One obvious, and understandable,
response is simply to label the opposition to GMOs irresponsible and scientifically illiterate. But The Ethics of Invention makes a convincing case that creating formal ways to analyze and assess technologies and their proper use will offer us the best chance of finding a path between “unbridled enthusiasm and anachronistic Luddism.”
Midwest Independent Research: Technology (mwir-technology.blogspot), an educational website, maintains a related book list.