What is it about?

We aim to learn what it takes to prevent harm in present-day AI systems by drawing on the history of system safety. This field has been tasked with safeguarding software-based automation in safety-critical domains such as aviation and medicine. Since the 1950s it has grappled with the increasing complexity of automated systems and drawn up concrete lessons for how to control for safety and prevent harm and accidents. These lessons build on the seminal work of system safety pioneer Professor Nancy Leveson.


Why is it important?

We are seeing a plethora of new harms and failures emerging from novel AI applications, often with vulnerable people bearing the brunt of ill-designed or ill-governed AI technology, ranging from AI in automated decision-making for welfare systems to self-driving cars and misinformation online. Meanwhile, many of the lessons from system safety about what it actually takes to build safe systems have yet to be integrated into the development and governance of AI systems. This presents opportunities to build safer and more responsible systems, but above all it offers ways to contest and critique currently unsafe implementations in order to address and stop emergent forms of algorithmic harm.

Perspectives

For me personally, the stakes of AI systems are high, especially where they establish unsafe conditions in social domains, thereby often reifying historical power asymmetries. Examples include the emergence of digital apartheid in South Africa through AI-based surveillance systems, and the horrendous treatment of workers on algorithmic gig work platforms. In many ways, the harms that emerge are externalized, i.e. not treated as part of the AI system design.

A system safety perspective could help empower those subjected to either ill-willed or unprincipled, faulty AI systems by offering a better understanding of how such harms emerge across the technical, social and institutional elements of the broader context, thereby bringing responsible actors and powerful players into view. We have done this for aviation and medicine too: when things go terribly wrong, there are procedures to carefully understand what happened in order to prevent more harm in the future. Often, a full spectrum of actors is involved, from the users and developers of a technology to supervisory bodies, auditors, regulators and civil society. This tells us that responsibly and sustainably safeguarding complex systems subject to software-based automation takes much more than attention to the technology alone. As such, the lessons and tools from system safety are useful for understanding what is needed to prevent new forms of harm in contexts where AI is relatively new, and for informing standards that the design and governance of such systems should meet, based on decades of experience.

Roel Dobbe
Technische Universiteit Delft

Read the Original

This page is a summary of: System Safety and Artificial Intelligence, June 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3531146.3533215.
