Safety Assurance: Stringent safety proofs for AI

Cyber-physical cognitive systems are evolving rapidly and reaching ever more areas of life. Increased autonomy, the use of artificial intelligence, and networking into "open systems of systems" make these systems increasingly complex. This also makes it increasingly difficult to prove that they are safe, i.e. free of unacceptable risk.

For this reason, the Fraunhofer Institute for Cognitive Systems IKS is researching new approaches to defining acceptable risk and to convincing safety assurance for such, often AI-based, cognitive systems.

What is Safety Assurance? When is a system safe enough?

Safety assurance refers to the process of comprehensively demonstrating the safety of a system. The first questions to ask are: When is a system or artificial intelligence safe enough? What requirements must the system fulfill, and how can their fulfillment be demonstrated? A safety assurance case is then the structured, continuous chain of argument that proves safety.

Safety Assurance Case: How does the safety of a system become verifiable?

A safety assurance case is used to demonstrate the safety of a system. The safety assurance cases researched and developed at Fraunhofer IKS rest on a consistent, stringent chain of argument whose validity is supported by analytical procedures. They formalize the safety-relevant properties of the system and take into account the uncertainty of the environment as well as the technical components of the system and their interactions. This systematic approach ensures that the system meets the predefined requirements and thus carries a low risk of failure.
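The idea of a structured argumentation chain can be illustrated with a minimal sketch. The example below is not a Fraunhofer IKS tool; it is a hypothetical, GSN-style claim tree in which a top-level safety claim is decomposed into sub-claims, and the case counts as complete only when every leaf claim is backed by evidence. The claim texts and evidence names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A safety claim (goal) that is either supported directly by
    evidence or decomposed into sub-claims."""
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A decomposed claim holds only if all of its sub-claims hold;
        # a leaf claim holds only if it cites at least one evidence item.
        if self.subclaims:
            return all(c.is_supported() for c in self.subclaims)
        return bool(self.evidence)

# Hypothetical case for an ML-based perception function.
case = Claim(
    "The perception function is acceptably safe",
    subclaims=[
        Claim("Residual ML error rate is below the accepted target",
              evidence=["statistical test report"]),
        Claim("Out-of-distribution inputs are detected and handled",
              evidence=[]),  # open claim: the case is not yet complete
    ],
)
print(case.is_supported())  # False: one sub-claim lacks evidence
```

Checking the tree mechanically in this way mirrors what the analytical procedures mentioned above do for a real assurance case: an argument is only as strong as its weakest unsupported claim.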

In addition to developing safety assurance cases, Fraunhofer IKS extends existing safety analysis and fault modeling techniques. To this end, the institute also incorporates causal factors arising from the increasing complexity and uncertainty of the system into the risk assessment. On this basis, Fraunhofer IKS defines risk-minimization measures and evaluates their effectiveness.
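Evaluating the effectiveness of a risk-minimization measure can be sketched with a deliberately simple model. The snippet below is an illustrative assumption, not a Fraunhofer IKS method: it scores risk as severity times failure probability, and the hazard, numbers, and the assumed 90% detection rate of the measure are all hypothetical.

```python
def risk(severity: int, probability: float) -> float:
    """Simple risk figure: severity class (1-4) times failure probability."""
    return severity * probability

# Hypothetical hazard: a missed obstacle detection (severity class 4,
# assumed baseline failure probability 1e-3 per demand).
baseline = risk(severity=4, probability=1e-3)

# Candidate measure: a redundant plausibility check assumed to catch
# 90% of the remaining failures, leaving 10% of the probability.
mitigated = risk(severity=4, probability=1e-3 * 0.1)

# Effectiveness: relative reduction of the risk figure.
effectiveness = 1 - mitigated / baseline
print(f"risk reduced by {effectiveness:.0%}")  # risk reduced by 90%
```

Real risk assessments use richer models (e.g. exposure and controllability factors, or causal fault trees), but the principle is the same: quantify the risk with and without the measure, and compare.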

These systematic approaches to the verification of safety-relevant properties of cognitive systems are used both at the level of the overall system and for the safety verification of individual AI- and ML-based functions.

Safety Assurance from an interdisciplinary perspective

Fraunhofer IKS takes an interdisciplinary approach to safety assurance and incorporates results from the two other main research areas "Trustworthy AI" and "Resilient Software Systems" into the development of safety assurance approaches.

Further research topics


Trustworthy AI

AI-based systems must be trustworthy to be used in safety-critical areas. This is where Fraunhofer IKS research comes in, developing methods to make AI safer and more understandable.


Resilient Software

Resilient software systems must be controllable and adaptable. Fraunhofer IKS develops solutions that ensure the usability and safety of cyber-physical systems even in dynamic environments.


Use cases and references to the research areas of Fraunhofer IKS can be found in our reference overview.