A recent MIT Sloan Management Review piece is making the rounds on LinkedIn, and it should be uncomfortable reading for every leader still running incident reviews with a flip chart and a search for the guilty party. The article introduces a concept called narrative responsibility and uses Boeing’s decade-long safety crisis as a case study to demonstrate that the old model is broken.
The core argument is deceptively simple: once decisions are made by a mesh of humans, code, data, and organisational culture, the 1980s playbook of “find the culprit, fire them, move on” doesn’t just fail — it actively guarantees the next incident.
The Boeing Problem Is a Systems Problem
Boeing has now lost two CEOs in the wake of safety failures. After 346 people died in two 737 MAX crashes, a door plug blew off an Alaska Airlines jet in mid-flight in January 2024. A federal audit found Boeing failed 33 of 89 audits of its 737 MAX production line. The company paid $2.5 billion in fines and compensation and agreed to enhanced oversight. The problems continued anyway.
The traditional explanation? Poor leadership, misaligned incentives, regulatory capture, cost-cutting culture. All true. But all incomplete. Because in every case, the post-incident narrative converged on individuals: a fired executive, a dismissed engineer, a censured regulator. The systemic conditions that produced those individuals’ decisions were left intact.
That’s the pattern MIT Sloan is calling out. The investigators found a convenient causal account. They got closure. Boeing had another incident.
What Is Narrative Responsibility?
The framework, grounded in research published in MIS Quarterly, reframes accountability from a verdict to a story. It argues that genuine organisational learning requires a collaborative, honest revisiting of how decisions were made, not a hunt for who made the “wrong” one.
Three principles define the approach:
- Map the real story — beyond the obvious. Post-incident reviews typically converge toward stable explanations that enable closure. Narrative responsibility resists that closure. It attends to what was contested, ambiguous, ignored, or silenced — the signals that were there but weren’t acted on, and why.
- Distribute ownership beyond blame. Who designed the system? Who set the incentives? Who approved the shortcuts? Who saw the warning and said nothing? Accountability is distributed across the sociotechnical web — not assigned to the individual at the end of the decision chain.
- Embed ongoing reflection into practice. This isn’t a post-mortem exercise. It’s a continuous practice of asking how collective activities, assumptions, and technologies are shaping decisions right now, before the incident happens.
Why AI Makes This Urgent Right Now
Boeing’s failures predate AI-enabled decision-making at scale. But they are a precise preview of what happens when organisations embed AI into operations without upgrading their accountability model first.
In 2026, many organisations operate with AI systems that act faster than human review allows — optimising outcomes continuously, shaping recommendations, filtering signals. When an AI-influenced decision causes harm, there is no single hand on the wheel. The outcome emerged from training data, model assumptions, human sign-off chains, and cultural shortcuts all interacting. As AI governance experts note, “accountability must be designed into AI systems, not reconstructed after failure.”
The “find the culprit” model collapses entirely in that environment. You can fire the data scientist. You can dismiss the product owner. You will not fix the system.
The Leadership Gap No One Is Talking About
The uncomfortable truth is that most leaders have not upgraded their accountability operating model. They are running a 1980s response playbook inside a 2026 sociotechnical system, and the mismatch widens each time another AI capability is added to a workflow without a corresponding governance update.
Here’s what the accountability upgrade actually requires:
- Signal inventories — documented records of warnings raised, flags ignored, and by whom, before incidents happen (a minimal sketch of one such record follows this list)
- Incentive audits — honest mapping of what behaviours the system actually rewards versus what it claims to reward
- Responsibility by design — accountability assigned at the point AI systems are built, not reconstructed after they fail
- Narrative reviews — structured post-incident processes that resist the pull toward convenient single-cause explanations
- Psychological safety as infrastructure — because distributed accountability only works if people can surface contested truths without career risk
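To make the first item concrete, here is a minimal sketch of what one signal-inventory record could look like in code. It is an illustration, not part of the MIT Sloan framework: the SignalRecord shape, the field names, and the unactioned helper are all assumptions about what a team might choose to capture.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical signal-inventory record. The point is provenance: warnings
# are logged before an incident, so a later narrative review can ask who
# saw what, when, and why nothing happened.
@dataclass
class SignalRecord:
    raised_on: date   # when the warning was first voiced
    raised_by: str    # a role, not just a name: "line QA engineer"
    signal: str       # the warning, in the raiser's own words
    channel: str      # where it surfaced: standup, audit finding, ticket
    acknowledged_by: list[str] = field(default_factory=list)  # who saw it
    action_taken: str = ""  # empty means seen but not acted on

def unactioned(inventory: list[SignalRecord]) -> list[SignalRecord]:
    """The ignored flags a narrative review should surface: signals that
    someone acknowledged but no one acted on."""
    return [s for s in inventory if s.acknowledged_by and not s.action_taken]
```

Even a table this small changes the post-incident conversation: instead of reconstructing who knew what from memory, the review starts from a record of acknowledged-but-unactioned warnings.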
The Limits of the Framework
MIT Sloan is clear that narrative responsibility is not a panacea. The most obvious risk: if leadership controls the narrative, the framework can be weaponised to diffuse accountability rather than distribute it. “We all share responsibility” can become cover for “no one is responsible.” That is the corporate version of a story where no one is ever at fault.
The researchers are explicit: this framework must complement, never replace, legal and regulatory obligations. The EU AI Act exists for exactly this reason. Narrative responsibility is a learning and governance tool — not a legal defence.
The Practical Question for Leaders
After your next incident — technical, operational, or product — ask these questions before you call the review meeting:
- What signals existed before this failure, and who saw them?
- Which incentives made the shortcut look rational to the person who took it?
- Which part of the system — code, culture, process, data — made this outcome more likely?
- Who designed that part of the system, and were they in the room?
- What story will this review tell, and whose voice will shape it?
Boeing’s story keeps repeating because the review process keeps producing the wrong story. Blamestorming is emotionally satisfying and strategically useless. If your accountability model cannot name the system conditions that produced the failure, you have not done accountability; you have done theatre.
The next incident is already being built. The question is whether your governance model will see it.
Sources: MIT Sloan Management Review, “Rethink Responsibility in the Age of AI” (April 2026); Boeing, 2025 Chief Aerospace Safety Officer Report; Harvard Program on Negotiation, “Learning from Ethical Leadership Failures at Boeing” (2026); LogicManager, “Boeing’s Freefall Continues” (2025); Adeptiv.AI, “AI Governance in 2026”.
