Safety Assurance: Stringent safety proofs for AI

Cyber-physical cognitive systems are constantly evolving and entering more and more areas of life. Increased autonomy, the use of artificial intelligence, and networking to form "open systems of systems" make these systems increasingly complex. This also makes it more difficult to prove that they are safe, i.e., free of unacceptable risks.

For this reason, the Fraunhofer Institute for Cognitive Systems IKS is researching new ways of defining acceptable risk and of building convincing safety assurance for such cognitive systems, which are often AI-based.

What is safety assurance? When is a system safe enough?

Safety assurance refers to the process of comprehensively demonstrating the safety of a system. The first questions to ask are: When is a system, or an artificial intelligence, safe enough? What requirements must the system fulfill, and how can the fulfillment of these requirements be demonstrated? A safety assurance case is then the structured, continuous chain of argumentation used to demonstrate safety.

Safety assurance case: How does the safety of a system become verifiable?

A safety assurance case can be used to demonstrate the safety of a system. The safety assurance cases researched and developed by Fraunhofer IKS are based on consistent and stringent argumentation, supported by analytical procedures that establish its validity. Safety assurance cases formalize the safety-relevant properties of the system and cover the assessment of uncertainty in the environment as well as in the technical components and interactions of the system. This systematic approach can ensure that the system meets the predefined requirements and thus carries a low risk of failure.
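As a rough illustration of the idea, and not of Fraunhofer IKS's actual tooling or notation, an assurance case can be thought of as a tree of claims in which each claim is supported either by sub-claims or by evidence. The Python sketch below uses hypothetical class names (Claim, Evidence) and an invented example claim.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of a safety assurance case:
# a top-level safety claim is decomposed into sub-claims,
# and leaf claims must be backed by evidence (tests, analyses, proofs).

@dataclass
class Evidence:
    description: str  # e.g. "HIL test campaign report"

@dataclass
class Claim:
    statement: str
    sub_claims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if every sub-claim holds and leaf claims cite evidence."""
        if self.sub_claims:
            return all(c.is_supported() for c in self.sub_claims)
        return len(self.evidence) > 0

# Invented example: a miniature argument for a perception function
top = Claim(
    "The pedestrian-detection function is acceptably safe in the defined ODD",
    sub_claims=[
        Claim("Residual miss rate is below the target",
              evidence=[Evidence("Statistical evaluation on validation data")]),
        Claim("Known triggering conditions are mitigated",
              evidence=[Evidence("SOTIF analysis and scenario-based tests")]),
    ],
)
print(top.is_supported())  # True
```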

In addition to developing safety assurance cases, Fraunhofer IKS extends existing safety analysis and fault modeling techniques. To this end, the institute also includes in the risk assessment causal factors that result from the increasing complexity and uncertainty of the system. On this basis, Fraunhofer IKS defines risk-minimization measures and evaluates their effectiveness.
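Purely as a toy illustration of this kind of reasoning, and not of the institute's actual analysis methods, the snippet below assigns invented risk contributions to a few hypothetical causal factors, applies assumed mitigation-effectiveness values, and reports the residual risk share.

```python
# Illustrative only: all factors and numbers are made up for this sketch.
causal_factors = {
    "sensor noise under heavy rain": 0.30,        # relative risk contribution
    "distribution shift in input data": 0.50,
    "timing jitter in the perception pipeline": 0.20,
}

mitigation_effectiveness = {
    "sensor noise under heavy rain": 0.8,         # assumed 80 % risk reduction
    "distribution shift in input data": 0.6,
    "timing jitter in the perception pipeline": 0.9,
}

# Residual contribution of each factor after applying its mitigation
residual = {
    factor: contribution * (1.0 - mitigation_effectiveness.get(factor, 0.0))
    for factor, contribution in causal_factors.items()
}

print(f"Residual risk share: {sum(residual.values()):.2f} of the original 1.00")
```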

These systematic approaches to verifying the safety-relevant properties of cognitive systems are applied both at the level of the overall system and for the safety verification of individual AI- and ML-based functions.

SAFE AI Framework systematically manages uncertainties and supports continuous safety assurance

Introducing the SAFE AI methodology, a framework designed to meet high standards of safety and reliability in AI systems. It draws on internationally recognized standards such as ISO 26262, ISO 21448, and ISO/PAS 8800, as well as the requirements that the EU AI Act places on high-risk systems, and thus sets out a rigorous approach to AI safety assurance.

At its core, SAFE AI ensures the traceability of causal relationships between functional insufficiencies of AI systems and the corresponding mitigation measures, in line with ISO 21448 and ISO/PAS 8800. By systematically managing the uncertainties inherent in AI development, including those arising from processes, tools, and design choices, SAFE AI builds confidence in the system's performance and reliability.
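To make the idea of such traceability concrete, here is a deliberately simplified, hypothetical sketch, not an artifact of the SAFE AI framework itself, that links functional insufficiencies of an ML component to mitigation measures and verifying evidence, and checks that no link is missing. All identifiers and entries are invented.

```python
# Hypothetical traceability table for an ML perception component.
trace = [
    {
        "insufficiency": "low detection performance at night",
        "mitigation": "night-time data augmentation plus runtime brightness monitor",
        "evidence": "test report TR-042",          # placeholder identifier
    },
    {
        "insufficiency": "overconfidence on out-of-distribution inputs",
        "mitigation": "out-of-distribution detector gating the output",
        "evidence": "evaluation notebook EV-017",  # placeholder identifier
    },
]

# Simple completeness check: every insufficiency must trace to at least
# one mitigation and one piece of evidence.
assert all(row["mitigation"] and row["evidence"] for row in trace)
```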

One of its key features is the evaluation of confidence levels in assurance arguments, supported by an analysis of the strength of the underlying evidence, so that every safety claim rests on well-founded support. Moreover, SAFE AI integrates with DevOps practices, enabling the continuous safety assurance required by ISO/PAS 8800.
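As a minimal sketch of what evaluating confidence from evidence strength could look like, assume an invented scoring in which each piece of evidence receives a strength in [0, 1] and a claim is only as strong as its weakest evidence; this is not the SAFE AI scoring scheme.

```python
# Assumed strengths for the evidence supporting one safety claim (invented values).
evidence_strength = {
    "unit tests of the inference code": 0.9,
    "statistical performance evaluation": 0.7,
    "field monitoring data": 0.5,
}

# Weakest-link aggregation: the claim is only as strong as its weakest evidence.
claim_confidence = min(evidence_strength.values())
print(f"Confidence in the safety claim: {claim_confidence:.1f}")

# If the confidence falls below a target (e.g. 0.6), the weakest evidence
# indicates where the argument needs strengthening.
weakest = min(evidence_strength, key=evidence_strength.get)
print(f"Weakest evidence: {weakest}")
```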

Beyond these technical features, SAFE AI supports stakeholders across the AI value chain, assisting with the elicitation, validation, and verification of requirements and safety-relevant quality attributes. Using a contract-based approach, it ensures that the diverse needs of these stakeholders are addressed.
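The following sketch illustrates the general idea of a contract-based approach with hypothetical assume/guarantee conditions for an ML perception component; the names and thresholds are invented and do not come from SAFE AI.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assumptions: dict[str, Callable[[float], bool]]  # conditions the integrator must satisfy
    guarantees: dict[str, Callable[[float], bool]]   # properties the component promises in return

# Invented contract for a hypothetical perception component
perception_contract = Contract(
    assumptions={
        "image_brightness_lux": lambda v: v >= 10,
        "sensor_latency_ms": lambda v: v <= 50,
    },
    guarantees={
        "pedestrian_recall": lambda v: v >= 0.99,
        "false_positives_per_hour": lambda v: v <= 2,
    },
)

def check(observed: dict[str, float], conditions: dict) -> bool:
    """True if every contract condition whose quantity was observed is satisfied."""
    return all(cond(observed[name]) for name, cond in conditions.items() if name in observed)

# The integrator checks the assumptions; the supplier demonstrates the guarantees.
print(check({"image_brightness_lux": 25, "sensor_latency_ms": 40},
            perception_contract.assumptions))  # True
```

In such a scheme, the integrator is responsible for keeping the operating conditions within the assumptions, while the supplier provides evidence that the guarantees hold under those assumptions.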

Safety assurance from an interdisciplinary perspective

Fraunhofer IKS takes an interdisciplinary approach to safety assurance and incorporates results from the two other main research areas "Trustworthy AI" and "Resilient Software Systems" into the development of safety assurance approaches.

Safety assurance on our Safe Intelligence Blog

On our blog you will find articles on safety assurance and other research topics at Fraunhofer IKS. Read on directly:


Artificial Intelligence / 27.3.2023

How constraint-aware models can make industrial operations safe and efficient

Achieving safety with AI systems requires a comprehensive and collaborative approach, covering technical, ethical, and regulatory considerations. A central aspect is designing AI systems that comply with safety requirements expressed as constraints.


Safety Engineering / 14.2.2023

New approach to managing the safety of automated driving systems

Two leading experts in the safety of complex systems, from Fraunhofer IKS and the University of York, have published a paper proposing a new approach to assuring the safety of cognitive cyber-physical systems (CPS), including automated driving systems.


Artificial Intelligence / 3.11.2022

Where errors can creep in

If uncertainties in an autonomous system are not dealt with, they impair the system’s functionality and can lead to safety risks. We will look at specific sources of uncertainty that have been noted in the relevant literature.

Further research topics


Trustworthy AI

AI-based systems must be trustworthy to be used in safety-critical areas. This is where Fraunhofer IKS research comes in, developing methods to make AI safer and more understandable.


Resilient Software

Resilient software systems must be controllable and adaptable. Fraunhofer IKS develops solutions that ensure the usability and safety of cyber-physical systems even in dynamic environments.

References

Use cases and references to the research areas of Fraunhofer IKS can be found in our reference overview. Use the links below to jump directly to the area you are most interested in: