Safety Assurance: Rigorous safety proofs for AI

Cyber-physical cognitive systems are evolving constantly and spreading into ever more areas of life. Increased autonomy, the use of artificial intelligence, and networking to form "open systems of systems" make these systems increasingly complex. This also makes it harder to prove that they are safe, i.e., free of unacceptable risks.

For this reason, the Fraunhofer Institute for Cognitive Systems IKS is researching new ways to define acceptable risk and to convincingly assure the safety of such cognitive systems, which are often AI-based.

What is Safety Assurance? When is a system safe enough?

Safety assurance refers to the process of comprehensively demonstrating the safety of a system. The first questions to ask are: When is a system or artificial intelligence safe enough? What requirements must the system fulfill, and how can fulfillment of these requirements be demonstrated? A safety assurance case is then the structured, end-to-end chain of argument that provides this proof.

Safety Assurance Case: How can the safety of a system be verified?

A safety assurance case is used to demonstrate the safety of a system. The safety assurance cases researched and developed by Fraunhofer IKS rest on a consistent, rigorous argumentation whose validity is supported by analytical procedures. They formalize the safety-relevant properties of the system and account for uncertainty in the system's environment as well as in its technical components and their interactions. This systematic approach ensures that the system meets the predefined requirements and therefore carries a low risk of failure.
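
To make this idea more tangible, the following sketch shows one way such an argumentation chain could be represented in software. It is a minimal illustration loosely inspired by Goal Structuring Notation (GSN); all class names, fields, and example claims are assumptions made for this sketch and do not represent Fraunhofer IKS tooling.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal, illustrative model of a safety assurance case, loosely
# inspired by Goal Structuring Notation (GSN). All names and claims
# are hypothetical and not part of any Fraunhofer IKS tool.

@dataclass
class Evidence:
    description: str  # e.g. a test report or a formal analysis result

@dataclass
class Goal:
    claim: str                                        # safety claim to demonstrate
    evidence: List[Evidence] = field(default_factory=list)
    subgoals: List["Goal"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A goal holds if it has direct evidence, or if every
        subgoal in the argumentation chain is itself supported."""
        if self.evidence:
            return True
        return bool(self.subgoals) and all(g.is_supported() for g in self.subgoals)

# Example: a top-level claim decomposed into component-level claims.
case = Goal(
    claim="The cognitive system is free of unacceptable risk",
    subgoals=[
        Goal(
            claim="The ML-based perception meets its safety requirements",
            evidence=[Evidence("Robustness test report under distribution shift")],
        ),
        Goal(
            claim="Environmental uncertainty is adequately bounded",
            evidence=[Evidence("Operational design domain analysis")],
        ),
    ],
)
print(case.is_supported())  # True once every branch is backed by evidence
```

The tree structure mirrors the argumentation chain described above: the top-level safety claim is only considered demonstrated once every branch of the argument bottoms out in concrete evidence.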

In addition to developing safety assurance cases, Fraunhofer IKS extends existing safety analysis and fault modeling techniques. To this end, the institute also includes in the risk assessment causal factors that arise from the growing complexity and uncertainty of the system. On this basis, Fraunhofer IKS defines risk-minimization measures and evaluates their effectiveness.
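
As a rough illustration of this idea, the sketch below scores a hazard from its causal factors and shows how the assumed effectiveness of mitigation measures reduces the residual risk. The hazard, probabilities, and effectiveness values are all invented for this example and are not drawn from Fraunhofer IKS analyses.

```python
# Illustrative (hypothetical) risk model: a hazard's risk score is driven
# by causal factors, and mitigation measures reduce the probability that
# each factor leads to harm. All numbers and names are invented.

hazard = {
    "description": "Pedestrian not detected in heavy rain",
    "severity": 10,  # abstract severity score
    "causal_factors": {
        "sensor degradation": 0.02,     # probability contribution
        "ML misclassification": 0.05,
    },
}

# Assumed effectiveness of each mitigation (0.8 = 80% reduction).
mitigations = {
    "sensor degradation": 0.5,
    "ML misclassification": 0.8,
}

def residual_risk(hazard: dict, mitigations: dict) -> float:
    """Risk score after applying each mitigation to its causal factor."""
    p = sum(
        prob * (1.0 - mitigations.get(factor, 0.0))
        for factor, prob in hazard["causal_factors"].items()
    )
    return p * hazard["severity"]

print(f"Residual risk: {residual_risk(hazard, mitigations):.3f}")
# 0.02 * 0.5 + 0.05 * 0.2 = 0.02, times severity 10 -> 0.200
```

Comparing the residual risk before and after applying the mitigations is one simple way to evaluate the effectiveness of risk-minimization measures, in the spirit of the paragraph above.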

These systematic approaches to verifying the safety-relevant properties of cognitive systems are applied both at the level of the overall system and to individual AI- and ML-based functions.

Safety Assurance from an interdisciplinary perspective

Fraunhofer IKS takes an interdisciplinary approach to safety assurance and incorporates results from the two other main research areas "Trustworthy AI" and "Resilient Software Systems" into the development of safety assurance approaches.

Safety Assurance on our Safe Intelligence Blog

On our blog you will find articles on Safety Assurance and other research topics at Fraunhofer IKS. Read on:


Artificial Intelligence / 27.3.2023

How constraint-aware models can make industrial operations safe and efficient

Achieving safety with AI systems requires a comprehensive and collaborative approach, including technical, ethical, and regulatory considerations. A central aspect is designing AI systems that comply with safety requirements expressed as constraints.


Safety Engineering / 14.2.2023

New approach to managing the safety of automated driving systems

Two leading experts in the safety of complex systems, from Fraunhofer IKS and the University of York, have published a paper proposing a new approach to assuring the safety of cognitive cyber-physical systems (CPS), including automated driving systems.


Artificial Intelligence / 3.11.2022

Where errors can creep in

If uncertainties in an autonomous system are not dealt with, they can impair the system's functionality and lead to safety risks. We will look at specific sources of uncertainty that have been noted in the relevant literature.

Further research topics


Trustworthy AI

AI-based systems must be trustworthy to be used in safety-critical areas. This is where Fraunhofer IKS research comes in, developing methods to make AI safer and more understandable.


Resilient Software

Resilient software systems must be controllable and adaptable. Fraunhofer IKS develops solutions that ensure the usability and safety of cyber-physical systems even in dynamic environments.

References

Use cases and references for the research areas of Fraunhofer IKS can be found in our reference overview.