Most safety cases fail not because of poor engineering, but because they are treated as documentation exercises rather than structured arguments. The distinction matters more than most teams realize.
A safety case is not a compliance artifact. It is a claim — specifically, the claim that a system is acceptably safe for a defined use in a defined context. Everything else in the safety case is evidence and argumentation assembled to support that claim. When teams treat it as a document to be written rather than an argument to be made, the structure collapses before it starts.
The Documentation Trap
The documentation trap looks like this: a team assembles a large collection of test results, hazard analyses, FMEA outputs, simulation logs, and verification records, binds them together with a table of contents, and calls it a safety case. The content may be technically sound. The documentation may be exhaustive. But the safety case is missing — because no one has made the argument that connects the evidence to the claim.
This is not a semantic distinction. A collection of evidence without a structured argument cannot be evaluated, challenged, or improved. A regulator, a customer, or an internal review board cannot look at a pile of test logs and determine whether the system is acceptably safe. They can only determine whether a large number of tests were run.
The safety case has to answer a specific question: why should anyone believe this claim? Its answer takes the form of an argument for why the claim is true, with evidence supporting each step of that argument. When the argument is explicit, it can be examined. When it is implicit — buried in the assumptions of the team that produced the evidence — it cannot.
What a Safety Case Actually Is
A well-structured safety case begins with a top-level safety claim: the system will not cause harm of type X in context Y under conditions Z. It then decomposes that claim into sub-claims, each of which is either supported by evidence directly or further decomposed. The decomposition continues until every leaf node in the argument is supported by concrete evidence.
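As a concrete illustration, a sketch of that decomposition might look like the following. The names here (Claim, Evidence, sub_claims, the hazard labels) are hypothetical, chosen for this example rather than taken from any safety-case tool or standard.

```python
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """A concrete artifact: a test report, an analysis, a verification record."""
    description: str
    reference: str  # e.g. a document ID such as "TR-0042"


@dataclass
class Claim:
    """A node in the argument: supported by evidence directly or decomposed further."""
    statement: str
    sub_claims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)


# Top-level safety claim, decomposed until each leaf can carry concrete evidence.
safety_case = Claim(
    "The system will not cause harm of type X in context Y under conditions Z",
    sub_claims=[
        Claim(
            "Hazard H1 is mitigated to an acceptable level",
            evidence=[Evidence("Fault injection test report", "TR-0042")],
        ),
        Claim("Hazard H2 is mitigated to an acceptable level"),  # no evidence yet
    ],
)
```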
This structure, often formalized as Goal Structuring Notation (GSN), does several things that documentation alone cannot. It makes the argument visible. It makes the assumptions explicit. It shows exactly where evidence is thin, where claims are unsubstantiated, and where the argument would collapse under challenge.
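Once the argument is an explicit tree, finding the thin spots is a simple traversal. A minimal sketch, continuing the hypothetical structure above:

```python
def unsupported_leaves(claim: Claim, path: tuple[str, ...] = ()) -> list[tuple[str, ...]]:
    """Return the path to every leaf claim that has no supporting evidence."""
    here = path + (claim.statement,)
    if not claim.sub_claims:            # leaf: must be backed by concrete evidence
        return [] if claim.evidence else [here]
    gaps: list[tuple[str, ...]] = []
    for sub in claim.sub_claims:        # interior node: recurse into its sub-claims
        gaps.extend(unsupported_leaves(sub, here))
    return gaps
```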
It also changes the nature of the work. When the argument structure is explicit, you know what evidence you need before you collect it. You design tests, analyses, and verification activities to answer specific questions in the argument, rather than collecting evidence speculatively and hoping it adds up to something defensible later.
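In that framing, the gaps in the argument are the evidence plan. Running the traversal above before anything has been collected lists exactly the questions the tests and analyses need to answer:

```python
# Run before evidence collection: the output is the evidence plan.
for gap in unsupported_leaves(safety_case):
    print(" -> ".join(gap))
# With the example tree above, this prints the single unsubstantiated leaf:
#   The system will not cause harm ... -> Hazard H2 is mitigated to an acceptable level
```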
The teams that find themselves perpetually months from deployment are almost always teams that collected evidence before making the argument. They have plenty of data and no case.