McDonough School of Business
News Story

U.S. Airlines Learning from Close Calls, But They Can Try Harder, Study Finds

Travel via large U.S. commercial airlines is among the world’s safest modes of transportation, but airlines may be lulled into a false sense of confidence when they interpret averted collisions and other near-misses as proof that their safety systems are satisfactory, according to a team of expert risk analysts.

In calling for stricter scrutiny of near-misses as indicators that safety systems may need improvement, the researchers draw on prior work finding that near-misses “may masquerade as success, and apparent success tends to breed complacency” because an organization’s decision makers institutionalize established practices and routines and “reduce organizational search activities aimed at identifying further system improvements.” The International Civil Aviation Organization defines near-miss incidents as “occurrences, other than accidents, associated with the operation of an aircraft that affected or could have affected the safety of operation,” or, as the analysts put it, “outcomes that could have been worse but for the intervention of good fortune.”

Risk analysts Peter Madsen of Brigham Young University and Robin L. Dillon and Catherine H. Tinsley of Georgetown University’s McDonough School of Business present their data and analysis in a new paper, “Airline Safety Improvement Through Experience with Near-Misses: A Cautionary Perspective,” published in the online version of Risk Analysis, a publication of the Society for Risk Analysis.

Prior research, much of it conducted by Dillon and Tinsley, has shown that “people have a natural tendency to see near-misses as successes rather than as indicators that something is wrong,” says Madsen. He explains that the new research, using data from 1990-2007, shows that airlines successfully learn from near-misses when two conditions are met: first, the near-miss falls into a recognized category, and second, that category is known to have caused accidents in the past. But if near-misses “don’t fit into a recognized category or fit into a category that isn’t currently seen as particularly dangerous,” says Madsen, airlines may be squandering an opportunity to collect useful, safety-relevant information from those other types of near-misses. Additional efforts to collect and use this information could yield further safety improvements, the study suggests.

As a “clear example of ‘normalization of deviance,’” in which a risky behavior becomes commonplace over time because there are no apparent negative consequences, the authors cite the results of an investigation into Southwest Flight 1455. On March 5, 2000, Flight 1455 crashed through a 14-foot-high metal blast fence at the end of runway 8 at Burbank, California, crossed a street, and came to rest near a gas station. National Transportation Safety Board accident investigators learned that the flight crew had flown a risky “slam dunk” landing, a steep and fast approach. That approach had worked before, even though runway 8 is relatively short for a Boeing 737. Investigators concluded that the accident probably would not have happened if everything else had been the same but the runway had been longer. “For airlines to continue to improve safety, the industry needs to attend to the yet undiscovered or unrecognized risks in the system without waiting for an accident to bring attention to them,” the authors note.

Learning from near-misses is challenging because they can be interpreted either as proof of success or as evidence of systemic vulnerability. The authors cite one study that examined four airports equipped with advanced technology for monitoring the surface movements of aircraft and vehicles. Analysis of one month’s surveillance data showed that no taxiway collisions between two aircraft occurred at any of the four airports during that month, but “there were situations where aircraft were considered to be within 3 seconds of possible collision.” This situation occurred 8 times at John F. Kennedy International Airport (out of 148,883 aircraft interactions), 6 times at Minneapolis-St. Paul International Airport (MSP) (out of 64,607 interactions), 5 times at Memphis International Airport (out of 16,433 interactions), and 21 times at Hartsfield-Jackson Atlanta International Airport (ATL) (out of 253,360 interactions). ATL thus recorded more of these near-collision events than the other three airports combined. “Should this be interpreted as ATL operations are at greater risk for a collision in the future if current practices continue or as ATL operations are more skillful at having aircraft close together without problems?” the authors ask. Even though massive amounts of data have been gathered since the late 1990s, and better technologies are increasingly available to airlines and air traffic control systems, “the human problem of cognitive error remains,” the authors state.
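Readers who want to normalize these counts themselves can do so with a short calculation. The Python sketch below is an editorial illustration based only on the figures quoted above, not an analysis from the study; it converts each airport’s raw count into a rate per 100,000 aircraft interactions.

    # Illustrative only: near-collision counts and aircraft interactions as quoted above.
    near_collisions = {
        "JFK": (8, 148_883),
        "MSP": (6, 64_607),
        "MEM": (5, 16_433),
        "ATL": (21, 253_360),
    }

    for airport, (events, interactions) in near_collisions.items():
        rate = events / interactions * 100_000  # events per 100,000 interactions
        print(f"{airport}: {rate:.1f} near-collisions per 100,000 interactions")

    # Approximate output: JFK 5.4, MSP 9.3, MEM 30.4, ATL 8.3

Counting raw events points toward one reading of the data, while normalizing by the number of interactions points toward another, which is precisely the interpretive ambiguity the authors highlight.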

Risk Analysis: An International Journal is published by the nonprofit Society for Risk Analysis (SRA), an interdisciplinary, scholarly, international society that provides an open forum for all who are interested in risk analysis, a critical function in complex modern societies. Risk analysis includes risk assessment, risk characterization, risk communication, risk management, and risk policy affecting individuals, public- and private-sector organizations, and societies at a local, regional, national, or global level. www.sra.org

The complete study is available at http://onlinelibrary.wiley.com/doi/10.1111/risa.12503/abstract