Safety Architectures for AI Systems

Artificial intelligence must be validated

AI-based autonomous systems are susceptible to sporadic errors. Such errors, triggered for example by inclement weather or by situations the system has not learned, can have serious consequences. With this in mind, the Fraunhofer Institute for Cognitive Systems IKS researches and develops methods for the automated validation of artificial intelligence (AI) technologies and autonomous systems. The goal is for the system to be able to switch to a safe state at any time when expected or unexpected events occur, without being interrupted or deactivated.

AI in industrial and autonomous driving environments

Artificial intelligence and autonomous systems are used in fields of application such as manufacturing, logistics and material inspection, or for perception monitoring in autonomous vehicle systems. In these environments especially, safety-critical situations and costly outages must be avoided – in other words, these systems must be adequately validated. The Fraunhofer Institute for Cognitive Systems IKS therefore develops special safety architectures that validate the overall systems in which the AI technology is deployed.

Fraunhofer IKS develops comprehensive AI-based safety architectures


Fraunhofer IKS offers various building blocks for creating a comprehensive AI safety architecture that makes it possible to certify the AI application.

These building blocks include:

  • the intelligent cross-validation of existing internal and external sensors for a dependable environment model
  • an assistant for the systematic safety analysis of AI applications
  • automated AI monitoring for adherence to safety requirements
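To illustrate the third building block, the following is a minimal sketch of runtime AI monitoring: the monitor checks each AI output against a safety requirement and falls back to a predefined safe action instead of deactivating the system. The class name `SafetyMonitor`, the confidence threshold and the `SAFE_STOP` action are illustrative assumptions, not Fraunhofer IKS APIs.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """Output of a hypothetical AI perception component."""
    label: str
    confidence: float

class SafetyMonitor:
    """Checks AI outputs against a safety requirement at runtime."""

    def __init__(self, min_confidence: float = 0.9):
        self.min_confidence = min_confidence

    def check(self, prediction: Prediction) -> str:
        # Requirement: act on the AI output only if it is sufficiently
        # confident; otherwise switch to a safe state rather than
        # shutting the whole system down.
        if prediction.confidence >= self.min_confidence:
            return prediction.label
        return "SAFE_STOP"

monitor = SafetyMonitor()
print(monitor.check(Prediction("turn_left", 0.97)))  # acts on the AI output
print(monitor.check(Prediction("turn_left", 0.55)))  # falls back to the safe state
```

The key design point is that the monitor sits outside the AI component: the AI itself is treated as untrusted, and the simple, verifiable monitor enforces the safety requirement.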

Modular safety verifications for good lifecycle management

A further safety factor is the lifecycle management of AI-based applications, which allows software and AI models to be adapted quickly and keeps certification intervals extremely short. With our methods and tools, we support the generation of modular safety verifications and enable safety functions to be provisioned through familiar DevOps frameworks.
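The idea of modular safety verification can be sketched as a pipeline gate in which each part of the safety argument is checked separately, so that an update to one component only re-triggers the affected checks. The module names, metrics and thresholds below are assumptions for illustration, not an actual Fraunhofer IKS toolchain.

```python
# Each safety module verifies one requirement independently, so a model
# update only invalidates the modules whose evidence actually changed.

def check_accuracy(metrics: dict) -> bool:
    # Illustrative requirement: perception accuracy must stay above 95 %.
    return metrics.get("accuracy", 0.0) >= 0.95

def check_false_negative_rate(metrics: dict) -> bool:
    # Illustrative requirement: at most 1 % missed detections.
    return metrics.get("false_negative_rate", 1.0) <= 0.01

SAFETY_MODULES = {
    "perception_accuracy": check_accuracy,
    "missed_detections": check_false_negative_rate,
}

def safety_gate(metrics: dict) -> dict:
    """Return a per-module pass/fail verdict for the DevOps pipeline."""
    return {name: check(metrics) for name, check in SAFETY_MODULES.items()}

results = safety_gate({"accuracy": 0.97, "false_negative_rate": 0.02})
print(results)  # one verdict per safety module
```

In a DevOps setting, such a gate would run automatically on every release candidate, and deployment would proceed only if every module passes.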

Adaptive safety management

The ability to adapt to the environment and to its current state is an increasingly important feature of an autonomous system. Through our dynamic adaptive safety management approaches, we enable flexible management of risks based on local conditions and the actual state of the system. Hazardous situations can thus be evaluated in a differentiated manner, and the dependability of the AI can be assessed based on the situation.
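A minimal sketch of this dynamic risk evaluation: the confidence demanded of the AI rises when external conditions increase the risk, so the same AI output may be acceptable in one situation and trigger a degraded mode in another. The condition names and numeric weights are purely illustrative assumptions.

```python
# Adaptive safety management sketch: the acceptance threshold for the AI
# depends on the local conditions and the system's current state.

def required_confidence(weather: str, speed_kmh: float) -> float:
    base = 0.90
    if weather in ("rain", "fog", "snow"):
        base += 0.05   # degraded sensing -> demand more certainty
    if speed_kmh > 80:
        base += 0.03   # higher kinetic risk -> stricter threshold
    return min(base, 0.99)

def assess(confidence: float, weather: str, speed_kmh: float) -> str:
    """Decide, situation-dependently, whether to trust the AI output."""
    threshold = required_confidence(weather, speed_kmh)
    return "proceed" if confidence >= threshold else "degrade"

print(assess(0.92, "clear", 50.0))  # proceed: threshold is 0.90
print(assess(0.92, "fog", 100.0))   # degrade: threshold rises to 0.98
```

The same 0.92 confidence is sufficient in clear weather at low speed but not in fog at high speed, which is exactly the differentiated, situation-based evaluation described above.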