Dependable Artificial Intelligence (AI)

Artificial intelligence cannot be validated with conventional methods


Artificial intelligence (AI) quality depends to a large extent on the quality of the training data. The better the training data, the better the AI. Even if an extensive set of training data is available, you cannot assume that every critical situation will be covered.

Furthermore, minor changes in the environment, such as a dirty sensor or unfavorable weather conditions, can heavily and unpredictably influence AI-based decision-making. This leads to a practically infinite number of possible situation-dependent input values for the neural network. Because minor variations in the input values can lead to different classifications, predicting the behavior becomes practically impossible. To date, the only way to determine exactly what a neural network has learned has been complex, time-consuming observation.
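The sensitivity described above can be made concrete with a toy sketch: a fixed linear decision rule stands in for a trained network, and a tiny shift in one input feature, of the kind a dirty sensor might cause, flips the predicted class. All weights and input values here are illustrative, not drawn from any real system.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a fixed linear
# decision boundary. Weights and bias are illustrative only.
w = np.array([1.0, -1.0])
b = 0.0

def classify(x):
    """Return class 1 if the score is above the decision boundary, else 0."""
    return int(w @ x + b > 0)

# A clean reading that sits close to the decision boundary.
x_clean = np.array([0.501, 0.500])
# The same reading with a 0.002 perturbation in one feature
# (e.g., slight sensor degradation).
x_noisy = x_clean + np.array([-0.002, 0.0])

print(classify(x_clean))  # class 1
print(classify(x_noisy))  # class 0: the tiny perturbation flips the decision
```

Near a decision boundary, arbitrarily small input changes alter the output class, which is why exhaustive testing of all input situations is infeasible.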

This leads to a situation in which the decision quality of the AI technology cannot be verified with formal methods. Currently available approaches for self-estimation of the dependability of an AI system are not yet technically mature. Furthermore, artificial intelligence cannot be validated using conventional methods.

Artificial intelligence will change many industries

In order to utilize artificial intelligence in manufacturing, logistics and material inspection, or for perception monitoring in autonomous vehicles, verifiable safety targets are essential: in these fields, even minor errors can cause costly downtime or life-threatening situations. To enable AI deployment in such applications, the Fraunhofer Institute for Cognitive Systems IKS develops new methods for improving the explainability, transparency and robustness of neural networks.

 

Fraunhofer IKS makes AI technology trustworthy and dependable

Fraunhofer IKS offers you important mechanisms for verifying the quality of your AI solutions, including:

  • Methods and quality metrics for determining the trustworthiness or probability of error of a neural network
  • Monitors for validating AI runtime characteristics
  • Components for the automated validation of processing chains, such as the perception chain in autonomous vehicles
  • Inspection criteria for AI technology safety analyses
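A runtime monitor of the kind listed above can be sketched in a few lines. The example below is a minimal illustration, not Fraunhofer IKS's actual method: it flags any prediction whose top softmax confidence falls below a threshold, so that the surrounding system can switch to a safe fallback instead of acting on an unreliable output. The threshold value and function names are assumptions for illustration; real thresholds would be calibrated against validation data.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # illustrative value; real thresholds are calibrated

def softmax(logits):
    """Convert raw network outputs (logits) into a probability distribution."""
    z = logits - np.max(logits)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def monitor(logits):
    """Accept a prediction only if its top softmax probability clears the
    threshold; otherwise flag it so the system can fall back to a safe state."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(probs))
    confident = bool(probs[top] >= CONFIDENCE_THRESHOLD)
    return top, float(probs[top]), confident

# A decisive prediction passes; a near-uniform one is flagged.
print(monitor([8.0, 0.5, 0.1]))  # high confidence -> accepted
print(monitor([1.1, 1.0, 0.9]))  # ambiguous -> flagged for fallback
```

Confidence thresholding is only one of several monitoring signals; production monitors typically also check for out-of-distribution inputs and plausibility across the processing chain.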
In addition to contract development, we offer you various other options for collaborating with Fraunhofer IKS, such as joint innovation teams, studies and potential analyses. Here you will find an overview of our cooperation models.