Safety Architectures for AI Systems

Artificial intelligence must be validated


AI-based autonomous systems are susceptible to sporadic errors. Such errors, triggered for example by inclement weather or by situations the system has not learned, can have serious consequences. With this in mind, the Fraunhofer Institute for Cognitive Systems IKS researches and develops methods for the automated validation of artificial intelligence (AI) technologies and autonomous systems. The goal is for the system to be able to switch to a safe state at any time when expected or unexpected events occur, without being interrupted or deactivated.
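To illustrate the idea of switching to a safe state at runtime, here is a minimal sketch in Python. The confidence signal, threshold and mode names are illustrative assumptions for this sketch, not part of Fraunhofer IKS tooling.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()  # full AI-driven operation
    SAFE = auto()     # conservative fallback; the system keeps running

# Assumed, application-specific threshold: below this, the AI output
# is not trusted (illustrative value only).
CONFIDENCE_THRESHOLD = 0.85

def select_mode(confidence: float) -> Mode:
    """Decide the operating mode from a runtime confidence signal.

    Instead of interrupting or deactivating the system when an
    unexpected event (e.g., inclement weather, an unlearned situation)
    degrades the AI, the monitor degrades gracefully to a safe mode.
    """
    return Mode.NOMINAL if confidence >= CONFIDENCE_THRESHOLD else Mode.SAFE

# Example: a low-confidence perception result triggers the safe state
# without shutting the system down.
assert select_mode(0.42) is Mode.SAFE
assert select_mode(0.97) is Mode.NOMINAL
```

In a real architecture, the confidence signal would typically come from dedicated uncertainty estimation or out-of-distribution detection rather than the model's raw output score.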

AI in industrial and autonomous driving environments

Artificial intelligence and autonomous systems are used in fields of application such as manufacturing, logistics and material inspection, or for perception monitoring in autonomous vehicles. Especially in these environments, safety-critical situations and costly outages must be avoided – in other words, these systems must be adequately validated. The Fraunhofer Institute for Cognitive Systems IKS therefore develops special safety architectures that validate the overall systems in which the AI technology is deployed.

Fraunhofer IKS develops comprehensive AI-based safety architectures

Fraunhofer IKS offers various building blocks for creating a comprehensive AI safety architecture that makes it possible to certify the AI application.

These building blocks include:

  • the intelligent cross-validation of existing internal and external sensors for a dependable environment model (a simplified sketch follows this list)
  • an assistant for systematic safety analysis of AI applications
  • automated AI monitoring for adherence to safety requirements
  • modular safety verification for effective lifecycle management of AI-based solutions
  • provisioning of safety functions through familiar DevOps frameworks
  • adaptive safety management for flexible management of risks based on the local conditions and the actual state of the system
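As a rough illustration of the first building block, the sketch below cross-checks redundant sensor estimates of the same quantity before trusting the environment model. The sensor names, tolerance and fusion rule are illustrative assumptions, not Fraunhofer IKS methods.

```python
import statistics

# Illustrative tolerance for agreement between redundant sensors
# (assumed value; in practice this is application-specific).
AGREEMENT_TOLERANCE_M = 0.5

def cross_validate(estimates: dict) -> tuple:
    """Fuse redundant sensor readings and flag outliers.

    A sensor is treated as an outlier if its reading deviates from the
    median of all readings by more than the tolerance. If fewer than two
    sensors agree, no dependable estimate is returned and the caller
    should switch to a safe state.
    """
    median = statistics.median(estimates.values())
    outliers = [name for name, value in estimates.items()
                if abs(value - median) > AGREEMENT_TOLERANCE_M]
    trusted = [value for name, value in estimates.items()
               if name not in outliers]
    if len(trusted) < 2:
        return None, outliers
    return statistics.fmean(trusted), outliers

# Example: the radar value disagrees with camera and lidar and is
# flagged, but a dependable fused estimate is still available.
fused, suspect = cross_validate({"camera": 12.1, "lidar": 12.3, "radar": 20.8})
print(fused, suspect)  # 12.2 ['radar']
```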

Your benefits: comprehensive safety concept for your AI solution

With the methods and tools of Fraunhofer IKS, you can:

  • assess the risk of your AI solution
  • automatically generate a monitoring system
  • increase the leeway in managing risk
  • ensure dependable behavior of your autonomous systems in any situation
  • ensure dependable lifecycle management
  • verify the dependability of your AI systems
  • evaluate hazardous situations in a differentiated manner and assess the dependability of the AI based on the situation
  • quickly adapt software and AI states, enabling extremely short certification intervals