Lessons from the Air France 447 Disaster
What is it about?
Organizations, particularly those for whom safety and reliability are crucial, develop routines to protect themselves from failure. But even highly reliable organizations are not immune to disaster: prolonged periods of safe operation are punctuated by occasional catastrophes, a phenomenon labelled the “paradox of almost totally safe systems” — the idea that systems that are very safe under normal conditions may be vulnerable under unusual ones. In this paper, we analyse the loss of Air France 447. We show that an initial, relatively minor limit violation (a brief loss of speed indications) set in train a cascade of human and technological limit violations, with catastrophic consequences. Focusing on cockpit automation, we argue that the same measures that make a system safe and predictable also restrict cognition, which over time inhibits and erodes the disturbance-handling capability of a system’s operators. We also note limits to cognition in design processes that make it difficult to foresee complex interactions, thereby creating system vulnerabilities. We discuss the implications of our findings for predictability and control in other contexts and explore ways in which these problems might be addressed.
The following have contributed to this page: Professor Nick Oliver, Thomas Calvard, and Kristina Potočnik