Trustworthy AI

Artificial Intelligence (AI) is at the heart of Cognitive Systems. The development of trustworthy cognitive systems therefore starts with trustworthy AI. Recent developments such as the European Union's AI Act underline the growing importance of AI solutions that society can trust. The Fraunhofer Institute for Cognitive Systems IKS focuses its research on aspects that are particularly relevant for cyber-physical systems: approaches that have a direct impact on the reliability and safety of artificial intelligence across a range of application areas, including autonomous driving, production, smart farming, medicine, and quantum computing.

Safe AI-based image recognition

One focus of our research in the area of trustworthy artificial intelligence is reliable AI-based perception and image recognition, in particular the interpretation of 2D and 3D data using artificial intelligence. Such approaches are used, for example, for environmental perception in autonomous vehicles and for image interpretation in medical devices. To improve the reliability of perception and image recognition systems, Fraunhofer IKS is investigating approaches for AI monitoring, which observes the outputs of machine learning algorithms at runtime and evaluates their quality.
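The monitoring idea above can be sketched as a simple runtime wrapper that checks the confidence of each classifier prediction before passing it on. This is a minimal illustration, not Fraunhofer IKS's actual method: the toy model interface, the softmax-confidence criterion, and the 0.8 threshold are all assumptions made for the example.

```python
import math

# Minimal sketch of a runtime monitor for an ML classifier.
# The model interface and the 0.8 threshold are illustrative assumptions.

def softmax_confidence(scores):
    """Convert raw scores to probabilities and return the top probability."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return max(e / total for e in exps)

def monitored_predict(model, x, threshold=0.8):
    """Return (label, accepted); the monitor rejects low-confidence outputs."""
    scores = model(x)
    label = scores.index(max(scores))
    confidence = softmax_confidence(scores)
    return label, confidence >= threshold

# Toy "models" returning raw scores for three classes:
confident_model = lambda x: [0.1, 4.0, 0.2]   # clearly peaked -> accepted
uncertain_model = lambda x: [1.0, 1.1, 0.9]   # nearly flat -> rejected
```

In a real system, the rejected case would trigger a fallback, such as handing control to a safe state or a human operator, rather than silently discarding the prediction.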

To assess the quality of machine learning algorithms, we also conduct research into the training of artificial intelligence. Here, we focus on data-efficient learning approaches, since most industrial applications and companies do not have enough data to train artificial intelligence to the required quality. We consider the use of active learning, hybrid ML models, and synthetic data.
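Active learning, one of the data-efficient approaches mentioned above, can be sketched as a pool-based loop with uncertainty sampling: the learner asks for labels only on the points it is least sure about. The nearest-centroid "model", the margin criterion, and the toy data below are stand-ins chosen for illustration.

```python
import math

# Sketch of pool-based active learning with uncertainty sampling.
# The nearest-centroid classifier and the 2D toy data are illustrative.

def centroid(points):
    """Component-wise mean of a list of points."""
    return [sum(c) / len(points) for c in zip(*points)]

def predict_margin(centroids, x):
    """Return (label, margin); a small margin means an uncertain prediction."""
    dists = sorted((math.dist(x, c), label) for label, c in centroids.items())
    margin = dists[1][0] - dists[0][0]
    return dists[0][1], margin

def most_uncertain(centroids, pool):
    """Pick the unlabeled point with the smallest prediction margin."""
    return min(pool, key=lambda x: predict_margin(centroids, x)[1])

# Two labeled classes and a pool of unlabeled candidates:
labeled = {0: [(0.0, 0.0), (1.0, 0.0)], 1: [(5.0, 5.0), (6.0, 5.0)]}
centroids = {label: centroid(pts) for label, pts in labeled.items()}
pool = [(0.5, 1.0), (2.8, 2.6), (5.5, 4.0)]

query = most_uncertain(centroids, pool)  # the point nearest the decision boundary
```

The point selected for labeling is the one between the two classes, where a new label is most informative; points deep inside either class would add little.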

Predictive modeling: How AI can reliably support decisions

As part of its work on trustworthy artificial intelligence, Fraunhofer IKS is researching not only reliable AI-based predictions, but also methods for optimal decision making. Classical AI-based prediction models are not automatically suitable for generating reliable decision recommendations; causally informed predictive modeling has to be integrated in order to make well-founded decisions.

Using time series analysis, we evaluate series of past data to make reliable AI-based predictions about future developments. The application areas of such approaches are manifold: they are used, for example, to support medical diagnosis and the predictive maintenance of production facilities, machines, and devices. We develop methods for the robust, reliable, and early prediction of future events, such as the failure of individual machine components or the onset of a physical illness. In addition, through causal inference, we integrate methods into our AI solutions that provide defensible and actionable recommendations for decision-making.
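A deliberately simple sketch of the predictive-maintenance idea above: fit a linear trend to a sliding window of recent sensor readings and extrapolate when a wear indicator will cross a failure threshold. The readings, threshold, and window size are invented for illustration; real predictive-maintenance models are considerably more sophisticated.

```python
# Sketch: extrapolate a linear trend over a sliding window of sensor readings
# to estimate when a wear indicator will cross a failure threshold.
# All values (readings, threshold, window size) are illustrative.

def linear_fit(ys):
    """Least-squares slope and intercept of y over x = 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

def steps_until(ys, threshold, window=5):
    """Estimated remaining steps until the trend crosses `threshold`."""
    recent = ys[-window:]
    slope, intercept = linear_fit(recent)
    if slope <= 0:
        return None  # no rising trend, so no predicted crossing
    t_cross = (threshold - intercept) / slope  # from the start of the window
    return max(0.0, t_cross - (len(recent) - 1))

readings = [1.0, 1.1, 1.3, 1.4, 1.6, 1.7, 1.9, 2.0]
remaining = steps_until(readings, threshold=3.0)  # roughly 6.5 steps left
```

Such an estimate would feed into maintenance scheduling: a short remaining horizon triggers an inspection well before the predicted failure.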

Trustworthy AI from an interdisciplinary perspective

Fraunhofer IKS takes an interdisciplinary view of trustworthy artificial intelligence and incorporates results from its other two research areas, "Safety Assurance" and "Resilient Software Systems", into the development of approaches and methods. For example, the Safety Assurance area develops safety cases to demonstrate that concrete safety requirements for a system are met.

Further research topics


Safety Assurance

Fraunhofer IKS is researching the requirements that an AI must fulfill in order to be safe enough. We are also working on safety cases to prove the safety of the overall system.


Resilient Software Systems

Resilient software systems must be controllable and adaptable. Fraunhofer IKS develops solutions that ensure the usability and safety of cyber-physical systems even in dynamic environments.

References

Use cases and references for the research areas of Fraunhofer IKS can be found in our reference overview.