How to deal with an AI near-miss: Look to the skies

By Kris Shrishak, May 9, 2023

https://thebulletin.org/wp-content/uploads/2023/05/TWA-514-aftermath-150x150.png

The crash of Trans World Airlines flight 514 into a mountain in Virginia in 1974 helped lead to the establishment of the Aviation Safety Reporting System, which shares safety-critical information across the industry and prevents avoidable serious incidents. Could this approach apply to near-misses in the world of AI? Image courtesy of Bureau of Aircraft Accidents Archives.

In 2017, as wildfires raged in California, people fled their homes in search of safety. Many relied on navigation apps on their mobile phones. After driving for a few minutes, they realized that the navigation app was directing them toward the wildfire (Graham and Molina 2017), not away from it. They ditched the app and, helped by a police officer who stopped them from driving into the fire, found their way to safety. They had a close call, or a near-miss, that could have resulted in a serious incident.

These navigation apps use artificial intelligence (AI) to identify and suggest the shortest or quickest route to a destination. One of the criteria is how crowded a street is: a navigation app is more likely to suggest an empty street than a crowded one. This criterion for routing is usually useful.
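To make that criterion concrete, here is a minimal, hypothetical sketch of how a route might be scored on estimated travel time, with congestion as a penalty. The function, speeds, and weights are illustrative assumptions, not the logic of any actual navigation app.

```python
# Hypothetical sketch of route scoring: lower cost wins. The congestion
# penalty and speeds are illustrative assumptions, not any real app's logic.

def route_cost(length_km: float, congestion: float) -> float:
    """Estimated travel time in hours; congestion runs from 0.0 (empty) to 1.0 (gridlock)."""
    base_speed_kmh = 50.0
    # Crowded streets are assumed to slow traffic down.
    effective_speed = base_speed_kmh * (1.0 - 0.8 * congestion)
    return length_km / effective_speed

routes = {
    "crowded evacuation road": route_cost(10.0, congestion=0.9),
    "empty road toward the fire": route_cost(12.0, congestion=0.0),
}

# The empty road wins on travel time, even though it leads toward the hazard:
# the cost function has no notion of a wildfire ahead.
print(min(routes, key=routes.get))
```

The point of the toy example is that a score built only on distance and crowding will happily recommend a dangerous road, because nothing in the scoring step represents the danger.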

Except when it is not.

A path can be empty because there is a fire. Or because it is the sea and not a street (Fukita 2012), as three Japanese tourists in Australia learned. Those tourists had a near-miss in 2012, but they survived. (Nine years later, a man in India drove into a body of water. He did not survive [CNN News18 2021].)

A near-miss is an event that could have caused significant harm, such as serious injury or death, but did not. A serious incident and a near-miss differ only in their outcome. If the circumstances had been slightly different, a near-miss would have been a serious incident. Near-misses occur more often than serious incidents; the factors that result in near-misses are the same as those behind serious incidents; and near-misses provide information that helps identify errors and fix them (Thoroman, Goode and Salmon 2018).

In the case of AI systems, the risks go beyond serious injury or death. Mundane uses of AI systems, such as navigation apps, can have safety implications, but the harms don’t stop there. AI systems can harm people’s fundamental rights. They have contributed to wrongful arrests (Hill 2020), enabled housing discrimination (US Department of Justice 2022), and perpetuated racial and gender discrimination (Ahmed 2020). These harms can ruin people’s lives. They should be treated as serious incidents.

 

Ongoing efforts are insufficient

There is a growing realization around the world that AI systems need to be regulated. The European Union (EU) has proposed a draft regulation, the AI Act (European Commission 2021), which would regulate AI applications considered “high-risk.” Among its requirements is that serious incidents that put lives in danger or disrupt the operation of critical infrastructure must be reported. If the regulation is passed, such reporting would be an obligation on any company selling high-risk AI systems in the EU, including companies based in the United States.

Reporting and documenting serious AI incidents is an important step. Serious incidents logged in a database can help regulators monitor patterns and take adequate action.

But this would not go far enough. Such a database would fail to record near-misses that could have helped prevent serious incidents. That is a mistake, because serious-incident reporting and near-miss reporting are complementary.

Outside the regulatory arena, a collaboration between industry and a non-profit set up a publicly accessible AI incident database in 2020 (McGregor 2021). The database was created to document AI failures and allows volunteers to manually submit publicly reported incidents, including near-misses. Editors then assess the submissions and decide whether to add them to the database. Each added incident is described and supported by links to media articles.

There is a multi-stakeholder effort in the Organisation for Economic Co-operation and Development (OECD) to establish a common framework for global incident reporting that aims to learn from past incidents, including near-misses. Like the AI incident database, the OECD relies on consolidating media articles into a database. However, instead of adding incidents manually, the OECD intends to automate the population of the database (Plonk 2022).


These efforts are laudable. But they are limited in what they can achieve. As the OECD states, its goals include informing AI risk assessments, AI foresight work, and regulatory choices (Plonk 2022). Preventing AI incidents is not one of the goals, perhaps due to the limitations of an approach that relies on media articles. Inspired by the Aviation Safety Reporting System (Aviation Safety Reporting System n.d.), the AI incident database aspires to prevent repeated AI failures, but it is equally limited to cataloging media reports of AI incidents.

 

Documenting near-misses

Ongoing efforts to document and learn from AI incidents would benefit from studying near-miss reporting systems in complex realms such as aviation, which has successfully used such systems to improve flight safety. Just as in the air, near-miss reporting can help AI on the ground by improving existing systems and addressing weaknesses, such as the interaction of more than one factor in causing a malfunction.

A system malfunction often has no single cause. Instead, multiple underlying factors can combine in ways the developers did not foresee, causing an AI system to fail. Often, these incidents are detected before they become serious. However, they don’t get reported outside the company. That is why information on how a serious incident was prevented is so valuable: when these successful detections and the fixes are logged in a near-miss reporting system, other companies can learn from them and prevent future incidents.

Learning from our own direct experiences might seem to be enough, but when lives and dignity are on the line, we should also learn from the experiences of others. In December 1974, Trans World Airlines flight 514 crashed into Mount Weather, Virginia, while approaching Washington’s Dulles airport, 28 miles away. All onboard were killed. It was later learned that a United Airlines flight in September 1974 had narrowly missed the same mountain on its approach to Washington Dulles (Aviation Safety Reporting System 2001). The pilots of the United Airlines flight reported this internally, and the information was passed on to other pilots of the same airline, but not to other airlines. Had this information been available to the pilots of Trans World Airlines flight 514, their fate might have been different. This recognition contributed to the creation of the Aviation Safety Reporting System, which shares safety-critical information across the industry and prevents avoidable serious incidents.

An AI near-miss at one company could hold important lessons for many others. Beyond the fact that companies develop similar products, components of AI systems are often shared across the industry. For example, the software library known as “TensorFlow” is widely used and supports various algorithms; many applications are built on such libraries. A near-miss involving such a library could have ramifications for numerous companies. Cooperation through a near-miss reporting system would be a boon for the industry, allowing it to share information and fix problems. Most importantly, such information sharing can explain why something went wrong and how a serious incident was prevented.

 

Principles and properties of near-miss reporting

There are many lessons that can be adapted from the Aviation Safety Reporting System to AI near-miss reporting systems. For one thing, the entity that operates and maintains such a system is critical to its success. The Aviation Safety Reporting System is run by an independent third party, NASA, which is trusted by the aviation industry and by the aviation regulator, the Federal Aviation Administration. (A regulator should not run such a system, because users would likely be discouraged from submitting reports for fear of punishment.)


The system allows anyone involved in aviation operations, including pilots, cabin crew, and ground staff, to submit reports, which are then processed by the system’s staff. The lesson is that for an AI near-miss reporting system to succeed, reports should not be limited to those gathered from the media; instead, developers, designers, and deployers of AI systems should be able to report near-misses directly. These actors have the most access to AI systems and can observe when things go wrong and why.

Even a well-run system that allows all relevant actors to participate will not be successful if it does not provide the right incentives. While companies, especially smaller ones, can learn from the near-misses of other companies, reputational harm and potential financial loss may dissuade them from reporting their own near-misses. These are important concerns that must be addressed to establish a successful near-miss reporting system.

Consequently, an AI near-miss reporting system should have at least four properties to encourage AI actors to submit reports.

First, near-miss reporting should be voluntary. A near-miss reporting system helps capture issues that are not reported to a mandatory serious incidents reporting system. A near-miss reporter is thus contributing to a safer AI ecosystem. Such a contribution takes time and effort, and should not be made mandatory.

Second, near-miss reporting should be confidential. The near-miss report published in the public database should not contain any identifiable information, so that there are no unnecessary negative repercussions for the reporter. This allows the reporter to explain why there was a failure, whether an unforeseen circumstance occurred, or whether a human made a mistake. All of these details are important for addressing problems without delay, but they might go unreported if confidentiality is not guaranteed.

Third, there should be a clear immunity policy to guide near-miss reporters. The reporter should receive limited immunity from fines for their efforts to report near-misses. Regulators should be considerate of the reporter’s contribution to the database in case a serious incident takes place. When a reporter submits a report, they should receive a proof of submission that they can later use to demonstrate that contribution. Such a proof can be generated before all identifiable information is removed and the report is made confidential by the maintainers of the database. (This is also an important reason for the database to be maintained by a trusted third party, and not a regulator.)

Finally, the reporting system should have a low bureaucratic barrier. The format of the report should be simple and accessible so that it takes minimal time and effort for the reporter. Ease of reporting is essential for such a system to succeed.
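To show how the confidentiality and proof-of-submission properties could fit together in practice, below is a minimal, hypothetical sketch of a report intake. The schema fields, the redaction step, and the hash-based receipt are all illustrative assumptions, not a description of the Aviation Safety Reporting System or of any existing AI incident database.

```python
# Hypothetical sketch of near-miss report intake: the reporter gets a receipt
# computed over the full report, while the published record is redacted.
# Field names and the hashing scheme are assumptions for illustration only.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class NearMissReport:
    reporter: str          # identifiable; removed before publication
    organization: str      # identifiable; removed before publication
    system_component: str  # e.g., a shared library used across the industry
    what_went_wrong: str
    how_it_was_caught: str
    fix_applied: str

def submit(report: NearMissReport) -> tuple[str, dict]:
    """Return a proof-of-submission receipt and the redacted public record."""
    # The receipt covers the full report, so the reporter can later show a
    # regulator that they filed it, even after the identifiable fields are
    # stripped from the public copy.
    full = json.dumps(asdict(report), sort_keys=True).encode()
    receipt = hashlib.sha256(full).hexdigest()

    public_record = asdict(report)
    for field in ("reporter", "organization"):
        public_record[field] = "[redacted]"
    return receipt, public_record

receipt, record = submit(NearMissReport(
    reporter="jane@example.com",
    organization="Example AI Co.",
    system_component="shared routing library",
    what_went_wrong="route scoring preferred an empty road near a hazard",
    how_it_was_caught="internal testing before release",
    fix_applied="hazard data added to the scoring step",
))
print(receipt[:16], record["reporter"])  # receipt prefix and "[redacted]"
```

Whether the receipt is a hash, a signed acknowledgment, or something else, the design point is the same: the reporter keeps evidence of having filed the report, while the published record stays anonymous.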

Documenting and publishing near-misses would help AI system developers and regulators avoid serious incidents. Instead of waiting for major failures before problems are addressed, disasters could be prevented. What we need is an incident database to which developers and users of AI systems voluntarily add incidents, including near-misses. To make such a database useful and to create an ecosystem where safer AI systems are prioritized, the database should have regulatory support. Privately run databases do not have the regulatory support that is required to give operators of AI systems the incentive to report their own near-misses.

If there is one thing that should not be replicated from other sectors, it is waiting decades before setting up and incentivizing AI near-miss reporting. It is never too soon to set up such a database. Now is the right time.

References

Aviation Safety Reporting System. n.d. https://asrs.arc.nasa.gov/.

Aviation Safety Reporting System. 2001. “ASRS: The Case for Confidential Incident Reporting Systems.” https://asrs.arc.nasa.gov/docs/rs/60_Case_for_Confidential_Incident_Reporting.pdf.

Graham, J., and Molina, B. 2017. “California fires: Navigation apps like Waze sent commuters into flames, drivers say.” December 7. CNBC.  https://www.cnbc.com/2017/12/07/california-fires-navigation-apps-like-waze-sent-commuters-into-flames-drivers-say.html.

CNN News18. 2021. “Man Drowns in Maharashtra as Google Maps Leads Him into a Dam With No Proper Signage.” January 13. https://www.news18.com/news/auto/man-drowns-in-maharashtra-as-google-maps-leads-him-into-a-dam-with-no-proper-signage-3283736.html.

European Commission. 2021. “Proposal for a Regulation of the European parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts.” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.

Fukita, A. 2012. “GPS Tracking Disaster: Japanese Tourists Drive Straight into the Pacific.” March 16. ABC News. https://abcnews.go.com/blogs/headlines/2012/03/gps-tracking-disaster-japanese-tourists-drive-straight-into-the-pacific/.

Hill, K. 2020. “Wrongfully Accused by an Algorithm.” June 24. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

Ahmed, M. 2020. “UK passport photo checker shows bias against dark-skinned women.” October 8. BBC. https://www.bbc.com/news/technology-54349538.

McGregor, S. 2021. “Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.” Proceedings of the AAAI Conference on Artificial Intelligence 35 (17): 15458-15463. doi: https://doi.org/10.1609/aaai.v35i17.17817.

Plonk, A. 2022. “Developing a framework for AI incident reporting, and an AI Incidents Monitor (AIM).” OECD. https://www.oecd.org/parliamentarians/meetings/ai-meeting-november-2022/Plonk-Audrey-Developing-a-framework-for-AI-incident-reporting-and-an-AI-incidents-monitor-AIM-07-11-2022.pdf.

Thoroman, B., Goode, N. and Salmon, P. 2018. “System thinking applied to near misses: a review of industry-wide near miss reporting systems.” Theoretical Issues in Ergonomics Science 712-737.

US Department of Justice. 2022. “Justice Department Secures Groundbreaking Settlement Agreement with Meta Platforms, Formerly Known as Facebook, to Resolve Allegations of Discriminatory Advertising.” June 21. https://www.justice.gov/opa/pr/justice-department-secures-groundbreaking-settlement-agreement-meta-platforms-formerly-known.
