Adaptive safety and performance management

Dealing with uncertainties: Dynamic Safety Management

Cognitive systems are deployed in dynamic contexts. This creates many uncertainties that make it difficult to predict the system's operating context and to validate the system accordingly. Functions based on machine learning are themselves a source of uncertainty, since their behavior cannot be accurately predicted.

Conventional validation approaches have always assumed the worst-case situation in these dynamic systems, leading to complicated, costly or ineffective system designs. However, dynamic safety and performance management makes it possible to overcome these barriers and to validate cognitive systems cost-effectively and efficiently, even in dynamic contexts.

Fraunhofer IKS enables the establishment of new cognitive technologies

The Fraunhofer Institute for Cognitive Systems IKS develops solutions for safety-critical systems, often based on AI technology. In contrast to conventional methods, the Fraunhofer IKS approach addresses the three interconnected issues of safety, reliability and cost together. This enables new, innovative, market-ready products for a wide range of industries that, without corresponding safety solutions, would be unacceptable to society.

More specifically, this involves adapting and enhancing proven software engineering methods, including processes such as determining and analyzing risks, carrying out error analyses, deriving safety concepts and generating safety verifications. The high degree of automation in model-based system development makes it possible to move partial aspects, such as dynamic risk analysis for different operational design domains (ODDs), to the runtime environment. To do that, researchers are developing models that a system can use at runtime to determine, and verifiably optimize, its own state with respect to safety, reliability and availability. The system is able to interpret these models on its own and derive measures to safeguard its resilience. Using risk models as a foundation, the system can also evaluate the risk of a specific situation. With these situation- and context-dependent safety management solutions, the desired cognitive system performance can be achieved while guaranteeing safety at the same time.
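To illustrate the principle, the following is a minimal sketch of how such runtime safety management could work; it is not an implementation of the Fraunhofer IKS method. A monitor evaluates a simple situation-dependent risk model and selects the most capable operating mode whose residual risk stays below a tolerable threshold. All names and numbers (OperatingMode, situational_risk, the thresholds and weights) are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of dynamic safety management: score the current
# situation against a risk model, then pick the most capable operating
# mode whose mitigated (residual) risk is still tolerable.

@dataclass
class OperatingMode:
    name: str
    performance: float      # relative utility of this mode (0..1)
    risk_reduction: float   # factor by which the mode mitigates base risk (0..1)

# Degradation ladder, ordered from most to least capable,
# ending in a minimal-risk maneuver as the fallback.
MODES = [
    OperatingMode("full_autonomy", performance=1.0, risk_reduction=1.0),
    OperatingMode("reduced_speed", performance=0.6, risk_reduction=0.4),
    OperatingMode("minimal_risk_maneuver", performance=0.1, risk_reduction=0.05),
]

TOLERABLE_RISK = 0.10  # assumed threshold from the safety concept

def situational_risk(visibility: float, traffic_density: float) -> float:
    """Toy risk model: risk grows with poor visibility and dense traffic."""
    return min(1.0, 0.5 * (1.0 - visibility) + 0.5 * traffic_density)

def select_mode(visibility: float, traffic_density: float) -> OperatingMode:
    base_risk = situational_risk(visibility, traffic_density)
    # Take the first (i.e., highest-performance) mode whose residual
    # risk stays below the tolerable threshold.
    for mode in MODES:
        if base_risk * mode.risk_reduction <= TOLERABLE_RISK:
            return mode
    return MODES[-1]  # fall back to the minimal-risk maneuver

if __name__ == "__main__":
    print(select_mode(visibility=0.95, traffic_density=0.1).name)  # full_autonomy
    print(select_mode(visibility=0.3, traffic_density=0.8).name)   # minimal_risk_maneuver
```

The point of the sketch is the trade-off described above: rather than designing for the worst case at all times, the system only sacrifices performance when the risk model indicates that the current situation actually demands it.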

Use and explainability of artificial intelligence (AI)

To make it possible to utilize artificial intelligence (AI) in dynamic systems, the Fraunhofer Institute for Cognitive Systems IKS is also working on making AI methods explainable. AI is currently a black-box technology: the decisions it makes are not transparent enough for humans to comprehend. Transparency, however, is an important requirement, especially in safety-critical applications. While general AI research addresses these aspects broadly, Fraunhofer IKS is conducting research into explainable AI approaches specifically for managing and controlling autonomous systems. Among other tools, researchers are relying on gray- and white-box approaches, as well as combinations of different AI methods with conventional algorithms, to represent the results in an interpretable fashion.

The goal is, on the one hand, to develop post-hoc analysis processes that make an existing model interpretable and, on the other, to develop directly interpretable AI methods that themselves supply explanations of how predictions were made. The focus is on methods that support and improve system safety analyses through the ability to explain errors and interpret deviations.
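As a concrete example of the post-hoc direction, the sketch below shows one widely used technique, a global surrogate model; it illustrates the general idea, not Fraunhofer IKS's specific method. An interpretable decision tree is trained to mimic a black-box model's predictions, producing human-readable rules whose fidelity to the black box can be measured. The dataset and model choices are assumptions made for the example.

```python
# Illustrative post-hoc explainability sketch: fit a small decision tree
# as a global surrogate to a black-box model's predictions, yielding
# human-readable decision rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for an opaque, high-performing model.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the interpretable tree on the black box's *predictions*,
# not on the ground-truth labels, so the tree explains the model itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate mimics the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

A surrogate of this kind is only trustworthy to the extent that its fidelity is high, which is why the sketch reports how often the tree agrees with the black box before its rules are used as an explanation.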