The specifications, analyses, reviews and testing carried out during the development of a safety-related system provide evidence of safety. This evidence is used to construct a justification that a system is sufficiently safe to be operated in a specified environment. The justification of safety, sometimes called a safety case, may be submitted to safety regulators for approval. Safety legislation may place detailed requirements on the scope and contents of a safety case. Since these requirements differ between countries and industry sectors, we do not wish to depend upon a particular detailed prescription for a safety case. Instead, we focus on the safety argument.
A safety argument provides a link between the safety evidence and a safety claim, showing that the safety evidence is sufficient to support the claim. The term 'safety argument' is sometimes used as a synonym for 'safety case'. Here, we use 'safety argument' to mean that part of the safety case that combines the safety evidence, showing that the evidence is sufficient to demonstrate that the system is acceptably safe. This use of the term 'safety argument' for a specific part of the safety case is illustrated below. A typical safety case also references applicable standards and regulations, derives safety targets, gives an overview of the system and its operation, and contains or references safety evidence.
Standards such as draft IEC 61508 and assessment frameworks such as the GAM framework (developed by the CASCADE ESPRIT project) address the collection of safety evidence, related both to the development process and to the final product. Methods for combining diverse evidence into an overall safety justification have not been addressed in previous safety standards, but were addressed by the SERENE Method.
A Safety Argument within a Safety Case
The kind of safety evidence that arises when developing or assessing a critical system is characterised both by its diversity and by its uncertainty. For example, the number of defects discovered when testing such a system depends on uncertain factors such as the number of inserted defects and the accuracy of the testing process. The individual factors are diverse in the sense that some (like the number of defects) may be objectively measurable, whereas others (like the accuracy of testing) are more subjective. It is crucial to be able to combine such diverse types of evidence. For example, in assessing system reliability it is well known that reliability cannot be assured at very high levels using product failure data alone. However, it seems reasonable to believe that reliability could be assured at higher levels if we could incorporate not just the results from testing but also other evidence about the process and product. Bayesian Belief Nets (BBNs) provide the best and most mature quantitative formalism for combining such diverse, uncertain information.
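To make the idea concrete, the following sketch shows how a BBN combines the two kinds of evidence mentioned above (defect counts and testing accuracy) into a posterior belief about system quality. The net structure, node names and all probability values are illustrative assumptions invented for this example, not figures from any real assessment; inference is done by simple enumeration.

```python
# A minimal sketch of evidence combination with a Bayesian Belief Net.
# Hypothetical three-node net: Defects -> ObservedFailures <- TestAccuracy.
# All probabilities below are illustrative assumptions, not calibrated data.

# Prior belief about the number of inserted defects.
P_defects = {"few": 0.7, "many": 0.3}

# Prior belief about the accuracy of the testing process.
P_accuracy = {"high": 0.6, "low": 0.4}

# Conditional probability of observing a LOW failure count during testing,
# given the defect state and the testing accuracy.
P_obs_low = {
    ("few", "high"): 0.9,
    ("few", "low"): 0.95,   # inaccurate testing tends to miss defects
    ("many", "high"): 0.2,
    ("many", "low"): 0.6,
}

def posterior_defects(observed_low=True, accuracy="high"):
    """P(defects | observed failure count, testing accuracy) by enumeration."""
    weights = {}
    for d in P_defects:
        p_low = P_obs_low[(d, accuracy)]
        likelihood = p_low if observed_low else 1.0 - p_low
        # Joint probability of this defect state with the observed evidence.
        weights[d] = P_defects[d] * P_accuracy[accuracy] * likelihood
    z = sum(weights.values())          # normalising constant
    return {d: w / z for d, w in weights.items()}

# Few observed failures under accurate testing strengthens belief in few defects.
print(posterior_defects(observed_low=True, accuracy="high"))
```

Note how the same observation (a low failure count) carries different weight depending on the subjective judgement of testing accuracy: under low-accuracy testing, few failures are only weak evidence of few defects, which is exactly the interplay of diverse evidence that the BBN formalism captures.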