Summary of Accident Models


00:00:00 - 00:30:00

This video discusses how accidents can be viewed as complex processes and how this understanding can help prevent them. It points out that the old paradigm of accident analysis relies on event analysis and on the assumption that most accidents are caused by operator error. The video suggests that a better understanding of accidents comes from looking at the entire sociotechnical system and attributing accidents to systemic factors rather than blaming the operator.

  • 00:00:00 This lecture surveys several accident models, including failure modes and effects analysis (FMEA), the Swiss cheese model, and the bow tie model. These models structure our reasoning: they are not used to compute precise risk estimates, but instead provide a paradigm and lens for understanding real-world hazards and events. An early model is FMEA, which catalogs failure modes along with their severities, occurrence probabilities, and detection probabilities. The first step is to identify the failure modes, then the potential adverse effects of each. For each effect, we estimate how severe it can be. We then look back at the potential root causes of the failure mode and, for each root cause, estimate the probability that it actually occurs. Lastly, we identify controls and anomaly indicators: ways to work against the root causes and ways to detect whether the failure mode or the root cause has occurred. This is how layers of defense work together to cut down risk (a small FMEA and barrier sketch follows this list). The bow tie model raises an important distinction between preventative and protective barriers: a preventative barrier stops an initiating event from escalating into the hazardous top event, while a protective barrier mitigates the consequences once that event has occurred.
  • 00:05:00 The presenter notes that accident models are simplistic compared to the complex systems they describe. Linear causal chains tend to oversimplify an accident's story, since complex systems typically involve many interacting causes and effects. Root cause analysis is therefore more fruitful when it examines the set of factors that contributed to an accident, rather than searching for a single cause or component that is ultimately responsible.
  • 00:10:00 The video discusses reductionism and how it can sometimes lead to inaccurate conclusions. It then turns to emergence, where a system's properties arise from the interactions between its parts rather than from any part on its own. The video gives an example of how a complex system can be defined and explains how emergence can produce complex collective behavior (see the Game of Life sketch after this list).
  • 00:15:00 The Systems Bible's principle that a complex system's mode of failure cannot ordinarily be predicted from its structure suggests that making deep learning systems safer will require studying how these systems actually fail, not just how they are built, and then working out how to prevent those failures.
  • 00:20:00 Systems thinking is useful for understanding the safety of complex systems and can complement reductive analysis. Various pressures, such as performance pressures, feed into one another and can push a system toward unsafe outcomes (a toy sketch of this drift follows this list).
  • 00:25:00 System safety is a complicated undertaking, and many factors affect it. The lecture presents Professor Nancy Leveson's STAMP model (Systems-Theoretic Accident Model and Processes), which accounts for both events and systemic factors, and emphasizes the importance of continuously monitoring a system to ensure it remains safe. STAMP also emphasizes designing the system to minimize risk, including diffuse and indirect risks (a minimal control-loop sketch follows this list).
  • 00:30:00 The video contrasts the assumptions of the old paradigm with the systems view of safety. The old paradigm relies on event analysis, while the systems view treats accidents as complex processes involving the entire sociotechnical system. The old paradigm says most accidents are caused by operator error; the systems view says operator error is a product of the environment in which it occurs. The old paradigm treats assigning blame as necessary for learning from and preventing accidents, whereas the systems view holds that blame gets in the way of learning, that systems tend to migrate toward states of higher risk, and that different ways of interpreting systems are needed to understand their safety.
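
A minimal sketch of the FMEA bookkeeping from the first segment, in Python. The failure modes, ratings, and controls below are hypothetical, and the 1-10 scales plus the risk priority number (severity × occurrence × detection) follow common FMEA practice rather than anything stated in the video; the barrier calculation at the end is the usual idealized reading of the Swiss cheese / bow tie picture, assuming independent layers.

```python
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    """One row of a hypothetical FMEA worksheet, using common 1-10 rating scales."""
    name: str
    effect: str
    severity: int        # 1 = negligible, 10 = catastrophic
    root_cause: str
    occurrence: int      # 1 = very unlikely, 10 = almost certain
    detection: int       # 1 = almost certainly detected, 10 = effectively undetectable
    controls: list[str] = field(default_factory=list)

    @property
    def rpn(self) -> int:
        # Risk priority number: a common way of ranking which failure modes
        # to address first; higher means more urgent.
        return self.severity * self.occurrence * self.detection

# Hypothetical entries, only to show the shape of the analysis.
worksheet = [
    FailureMode("coolant pump stalls", effect="overheating", severity=8,
                root_cause="bearing wear", occurrence=4, detection=3,
                controls=["vibration monitoring", "scheduled replacement"]),
    FailureMode("level sensor reads low", effect="late alarm", severity=6,
                root_cause="calibration drift", occurrence=5, detection=6,
                controls=["periodic recalibration"]),
]

for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.name}: RPN = {fm.rpn}, controls = {fm.controls}")

# Swiss cheese / bow tie picture: if independent barriers each fail with some
# probability, an initiating event only becomes an accident when every layer
# fails, so the probabilities multiply.
initiating_event_prob = 0.1
barrier_failure_probs = [0.2, 0.1, 0.05]   # hypothetical, assumed independent
accident_prob = initiating_event_prob
for p in barrier_failure_probs:
    accident_prob *= p
print(f"approximate accident probability: {accident_prob:.1e}")   # 1.0e-04
```

The multiplication at the end only holds if the barriers fail independently, which is exactly the kind of simplifying assumption the later segments on complex systems caution against.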
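
To make the 00:10:00 point about emergence concrete, here is a standard illustration that is not from the video: Conway's Game of Life. Each cell follows a trivial local rule, yet a five-cell "glider" produces coherent movement across the grid, a collective behavior you would not predict by inspecting the rule for a single cell.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One step of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Local rule: a dead cell becomes live with exactly 3 neighbours,
    # a live cell survives with 2 or 3 neighbours.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((16, 16), dtype=int)
# Seed a glider: its steady diagonal "movement" is an emergent, collective
# property that no single cell's rule mentions.
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for step in range(8):
    grid = life_step(grid)
ys, xs = np.nonzero(grid)
print("live cells after 8 steps:", list(zip(ys.tolist(), xs.tolist())))
# After every 4 steps the glider reappears shifted one cell down and right.
```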
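
The feedback between pressures and safety in the 00:20:00 segment can be caricatured with a toy difference equation. This is my own illustrative model with made-up coefficients, not something presented in the video: pressure erodes the safety margin, and an incident-free stretch invites more pressure.

```python
# Toy model of drift toward higher risk (illustrative assumptions only).
margin = 1.0      # abstract safety margin: 1.0 = comfortable, 0.0 = at the boundary
pressure = 0.1    # abstract performance/production pressure

for quarter in range(12):
    margin -= 0.08 * pressure           # pressure eats into the safety margin
    pressure += 0.05 * (margin > 0)     # "nothing bad happened, push harder"
    margin = max(margin, 0.0)
    print(f"quarter {quarter:2d}: pressure={pressure:.2f}, margin={margin:.2f}")

# Without a counteracting signal (audits, incident reviews, explicit safety
# constraints), the margin only ever shrinks: the system migrates toward
# states of higher risk.
```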
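
STAMP, mentioned at 00:25:00, treats safety as a control problem: losses occur when the control structure fails to enforce safety constraints on the process. The sketch below is a generic rendering of that idea under my own assumptions (the constraint, observation fields, and controller are hypothetical), showing a controller that continuously monitors a process and acts when a constraint is about to be violated.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyConstraint:
    """A condition the controlled process must not violate (hypothetical example)."""
    name: str
    is_satisfied: Callable[[dict], bool]

@dataclass
class Controller:
    """A minimal control loop in the spirit of STAMP: observe the process,
    compare against safety constraints, and issue control actions."""
    constraints: list

    def step(self, observation: dict) -> list:
        actions = []
        for c in self.constraints:
            if not c.is_satisfied(observation):
                # In STAMP terms, the controller enforces the constraint
                # rather than waiting for a component to "fail".
                actions.append(f"enforce: {c.name}")
        return actions

# Hypothetical process observations and constraint.
controller = Controller(constraints=[
    SafetyConstraint("tank pressure below limit",
                     lambda obs: obs["pressure"] < 8.0),
])

for obs in [{"pressure": 6.2}, {"pressure": 7.5}, {"pressure": 8.4}]:
    print(obs, "->", controller.step(obs) or ["no action"])
```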
