- Paperback: 292 pages
- Publisher: CRC Press; 2nd edition (September 30, 2010)
- Language: English
- ISBN-10: 0754678342
- ISBN-13: 978-0754678342
- Product Dimensions: 6.1 x 0.7 x 9.2 inches
- Shipping Weight: 1.2 pounds
- Average Customer Review: 6 customer reviews
Amazon Best Sellers Rank: #228,901 in Books
- #20 in Books > Engineering & Transportation > Engineering > Industrial, Manufacturing & Operational Systems > Ergonomics
- #83 in Books > Engineering & Transportation > Engineering > Industrial, Manufacturing & Operational Systems > Health & Safety
- #85 in Books > Science & Math > Technology > Safety & Health
Behind Human Error 2nd Edition
'This book, by some of the leading error researchers, is essential reading for everyone concerned with the nature of human error. For scholars, Woods et al provide a critical perspective on the meaning of error. For organizations, they provide a roadmap for reducing vulnerability to error. For workers, they explain the daily tradeoffs and pressures that must be juggled. For technology developers, the book offers important warnings and guidance. Masterfully written, carefully reasoned, and compellingly presented.'
Gary Klein, Chairman and Chief Scientist of Klein Associates, USA
'This book is a long-awaited update of a hard-to-get work originally published in 1994. Written by some of the world's leading practitioners, it elegantly summarises the main work in this field over the last 30 years, and clearly and patiently illustrates the practical advantages of going 'behind human error'. Understanding human error as an effect of often deep, systemic vulnerabilities rather than as a cause of failure, is an important but necessary step forward from the oversimplified views that continue to hinder real progress in safety management.'
Erik Hollnagel, MINES ParisTech, France
'If you welcome the chance to re-evaluate some of your most cherished beliefs, if you enjoy having to view long-established ideas from an unfamiliar perspective, then you will be provoked, stimulated and informed by this book. Many of the ideas expressed here have been aired before in relative isolation, but linking them together in this multi-authored book gives them added power and coherence.'
James Reason, Professor Emeritus, University of Manchester, UK
'This updated and substantially expanded book draws together modern scientific understanding of mishaps too often simplistically viewed as caused by 'human error'. It helps us understand the actions of human operators at the "sharp end" and puts those actions appropriately in the overall system context of task, social, organizational, and equipment factors. Remarkably well written and free of technical jargon, this volume is a comprehensive treatment of value to anyone concerned with the safe, effective operation of human systems.'
Robert K. Dismukes, Chief Scientist for Aerospace Human Factors, NASA Ames Research Center, USA
'With the advent of unmanned systems in the military and expansion of robots beyond manufacturing into the home, healthcare, and public safety, Behind Human Error is a must-read for designers, program managers, and regulatory agencies. Roboticists no longer have an excuse that the human 'part' isn't their job or is too esoteric to be practical; the fifteen premises and numerous case studies make it clear how to prevent technological disasters.'
Robin R. Murphy, Texas A&M University, USA
'It is rare to come across the definitive book on any complex subject. But in the case of understanding the nature of "human error" this book is surely it. It is hard to think of any other volume that comes close. That is perhaps not that surprising given the history of the book, the eminence of the five authors, and the intellectual, industrial and academic traditions they come from. Nonetheless, it is a major achievement.'
Human Factors & Ergonomics Society European Chapter Newsletter, June 2011
'Theories in the book can drive the culture towards a healthy just culture and an understanding of the possible real challenges behind visible operational errors.'
Finnair Safety No1, 2012
'The book's authors recognise they don't have all the answers and, indeed, left me feeling I was now asking more questions than when I began reading. But they did include a 10-point checklist at the end on constructive responses when you see the chance to improve safety, which I found useful. This is an interesting textbook and, while it is difficult in places, I think it is essential reading for those designing or operating complex systems.'
Health and Safety at Work, December 2010
About the Author
David D. Woods, Ph.D. is Professor at Ohio State University in the Institute for Ergonomics and Past-President of the Human Factors and Ergonomics Society. He was on the board of the National Patient Safety Foundation and served as Associate Director of the Veterans Health Administration's Midwest Center for Inquiry on Patient Safety. He received a Laurels Award from Aviation Week and Space Technology (1995). Together with Erik Hollnagel, he published two books on Joint Cognitive Systems (2006).

Sidney Dekker is Professor and Director of the Key Centre for Ethics, Law, Justice and Governance at Griffith University in Brisbane, Australia. Previously Professor at Lund University, Sweden, and Director of the Leonardo Da Vinci Center for Complexity and Systems Thinking there, he gained his Ph.D. in Cognitive Systems Engineering from The Ohio State University, USA. He has worked in New Zealand, the Netherlands and England; been Senior Fellow at Nanyang Technological University in Singapore; been Visiting Academic in the Department of Epidemiology and Preventive Medicine at Monash University in Melbourne; and been Professor of Community Health Science in the Faculty of Medicine, University of Manitoba, Canada. Sidney is the author of several best-selling books on system failure, human error, ethics and governance. He has flown the Boeing 737NG part-time as an airline pilot for the past few years. The OSU Foundation in the United States awards a yearly Sidney Dekker Critical Thinking Award.

Richard Cook, M.D. is an active physician, Associate Professor in the Department of Anesthesia and Critical Care, and Director of the Cognitive Technologies Laboratory at the University of Chicago. Dr. Cook was a member of the Board of the National Patient Safety Foundation from its inception until 2007. He is a leading expert on medical accidents, complex system failures, and human performance at the sharp end of these systems. Among many other publications, he co-authored A Tale of Two Stories: Contrasting Views of Patient Safety.

Leila Johannesen, Ph.D. works as a human factors engineer on the user technology team at the IBM Silicon Valley lab in San Jose, CA. She is a member of the Silicon Valley lab accessibility team, focusing on usability sessions with disabled participants and accessibility education for data management product teams. She is the author of The Interactions of Alicyn in Cyberland (1994).

Nadine Sarter, Ph.D. is Associate Professor in the Department of Industrial and Operations Engineering and the Center for Ergonomics at the University of Michigan. Through her pathbreaking research on mode error and automation complexities in modern airliners, she served as technical advisor to the Federal Aviation Administration's Human Factors Team in the 1990s, providing recommendations for the design, operation, and training of advanced 'glass cockpit' aircraft, and she shared the Aerospace Laurels Award with David Woods.
Top customer reviews
Below are 3 ideas from the book that I found particularly useful and insightful.
#1: Goal Conflicts
"Perhaps the most common hazard in the analysis of incidents is the naive assessment of the strategic issues that confront practitioners."
When investigating a failure, it is crucial to recognize that system operators are often dealing with multiple, competing goals. Operators must regularly assess and resolve these goal conflicts by making trade-off decisions that necessarily involve risk, and these decisions frequently must be made under time pressure. Operators are normally able to skillfully and successfully balance these conflicting goals and risks as part of their daily routine. Failures occur when the risks are unsuccessfully balanced, but that does not mean the operators were not skillful. In many cases, the actions that led to failure were the exact same actions that previously led to success. As investigators, we need to fully understand these goal conflicts in order to avoid hindsight bias and to improve the team's strategies for assessing and balancing risks. It is also important to understand that a team's strategies for balancing risks must evolve along with changes in the operating context and in the related goals and risks.

Putting this into practice, my team recently held a postmortem discussion about an outage that involved unplanned changes to a production system. During the discussion, it was enlightening to examine our goal conflicts and our decision-making process around making unplanned changes. We determined that eliminating unplanned changes was not the right course of action; in fact, unplanned changes are sometimes essential. Instead, we refined our decision-making process for unplanned changes to the production system and, acknowledging that our operating context is subject to change, set a checkpoint to re-assess the process three months later.
#2: Distancing through Differencing
"Do not discard other events because they appear on the surface to be dissimilar. At some level of analysis, all events are unique; while at other levels of analysis, they reveal common patterns."
"The obstacles to learning from failure are nearly as complex and subtle as the circumstances that surround a failure itself. Because accidents always involve multiple contributors, the decision to focus on one or another of the set, and therefore what will be learned, is largely socially determined."
I was fascinated by one of the book's case studies, that of a chemical fire that occurred during routine machine maintenance in a high-tech product manufacturing plant in the US. This company was one that took safety very seriously, with good working conditions, significant investment in safety, and strong motivation to examine all accidents promptly and thoroughly.
"The manufacturer had an extensive safety program that required immediate and high-level responses to an incident such as this, even though no personal injury occurred and damage was limited to the machine involved. High-level management directed immediate investigations, including detailed debriefings of participants, reviews of corporate history for similar events, and a “root cause” analysis. Company policy required completion of this activity within a few days and formal, written notification of the event and related findings to all other manufacturing plants in the company. The cost of the incident may have been more than a million dollars."
The company's investigation of this accident focused on the machine, the maintenance procedures, and the operators who performed the maintenance, and it identified multiple deficiencies that were corrected quickly. The fascinating part of this case study is that a broader review by outside investigators found that a very similar chemical fire had occurred earlier that same year in one of the company's manufacturing plants in another country, and that this prior event was well known to practitioners at the US plant. Both the practitioners and the internal investigators considered the prior event irrelevant because it had occurred in a non-US plant with a different safety system for containing fires and involved a different model of the machine. Later, the accident occurred yet again in the US plant, this time during a different shift, and this third event was rationalized as having been due to the lower skill level of the workers on that shift. The authors use the term "distancing through differencing" for this tendency of organizations and individuals to distance themselves from failures (i.e. "that could never happen here").

My takeaway is that there is a great opportunity across the many teams now providing cloud services in enterprise IT organizations such as my own to share the details of each other's failures, look for the general patterns, and avoid repeating the incidents that have occurred within other services.
#3: Design-induced failures
"Automation surprises begin with miscommunication and misassessments between the automation and users, which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do."
The book contains several chapters devoted to the ways in which the design of the computer systems used by operators can induce failures. These chapters detail several aspects of this issue, which is vitally important to enterprise IT as both a technology provider and a technology consumer. Among the many points raised here is that automation often introduces new burdens on the same operators it is intended to assist. I have seen this principle in action when teams implement automation for manual tasks but, unfortunately, do so in a way that does not give users/operators sufficient feedback to understand what is going on when the automation does not work. This is an example of automation written without regard for its users, and it can add significant complexity and brittleness to the system.
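To make the feedback point concrete, here is a minimal, hypothetical sketch (not from the book; all names are illustrative) of an automation runner that keeps the operator informed of what it has done, what failed, and what it never attempted, instead of failing silently:

```python
def run_with_feedback(steps, notify):
    """Run each (name, action) step in order, reporting progress and
    surfacing failures to the operator via the `notify` callback.

    `steps` is a list of (name, zero-argument callable) pairs;
    `notify` receives human-readable status strings (e.g. print,
    a chat-channel poster, or a log appender).
    """
    completed = []
    for name, action in steps:
        notify(f"starting: {name}")
        try:
            action()
        except Exception as exc:
            # On failure, report what failed, what already ran, and what
            # was never attempted, so the operator can reconstruct the
            # system's actual state rather than guess at it.
            not_run = [n for n, _ in steps if n not in completed and n != name]
            notify(f"FAILED: {name}: {exc}")
            notify(f"completed so far: {completed}")
            notify(f"never attempted: {not_run}")
            raise
        completed.append(name)
        notify(f"done: {name}")
    return completed


# Example usage: a toy two-step maintenance task reporting to a list.
messages = []
run_with_feedback(
    [("rotate-logs", lambda: None), ("restart-service", lambda: None)],
    messages.append,
)
```

The design choice being illustrated is simply that the automation narrates its own state transitions through a channel the operator already watches, which is one way to avoid the "silent and strong" automation the review criticizes.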
The idea that error is a discrete, scientific category of human performance is finally buried by this work. Although the better human performance researchers have known and accepted this for more than a decade, the message has percolated through the larger research community slowly, mostly for want of a single text that covers the history and experience that led to this conclusion.
Still, many people will continue to misunderstand the nature of "error" as the first review of this book demonstrates.
This is a perfect book for a class on human performance assessments that stray into the thicket of troubles that surround the term "error".