Robust AI: Uncertainty Estimation

Fraunhofer IKS brings together safety and Artificial Intelligence (AI): Robuscope, a new application from the Fraunhofer Institute for Cognitive Systems IKS, assesses the reliability of artificial intelligence at the push of a button.

Robuscope allows industrial companies, universities and other scientific institutions to test the robustness of the results of their AI models. The platform uses uncertainty and robustness quantification metrics to assess how reliable the analyses of the tested AI models are. The online tool from Fraunhofer IKS provides detailed information on how the tested AI models can be optimized and thus made safer.


Challenge: Testing reliability and robustness of AI

Specifically, Robuscope provides answers to the following questions:

  • How reliable is the self-assessment of the artificial intelligence?
  • How robust is the model?
  • What is the quality of its predictions?

What’s more, the tool does not require sensitive data to make reliable statements: neither confidential material such as the AI model itself nor real data needs to be uploaded to use it.
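Fraunhofer IKS does not disclose which metrics Robuscope computes internally. As an illustration of how reliability can be assessed from model outputs alone, the sketch below computes Expected Calibration Error (ECE), a standard measure of how well a classifier's confidence matches its actual accuracy. It needs only predicted confidences and a correct/incorrect flag per prediction, not the model or the raw data. All names here are illustrative, not part of Robuscope.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between predicted confidence and observed accuracy,
    weighted by how many predictions fall into each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        # |accuracy in bin - mean confidence in bin|, weighted by bin size
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap
    return ece

# A model that is 90 % confident but only 60 % correct is poorly calibrated:
conf = [0.9, 0.9, 0.9, 0.9, 0.9]
hit = [1, 1, 1, 0, 0]
print(expected_calibration_error(conf, hit))  # → 0.3 (large calibration gap)
```

A well-calibrated model would score near zero here; a large ECE is exactly the kind of signal that tells you when not to trust a model's self-assessment.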

Solution: Safety Analysis of AI models

Robuscope is particularly suitable for performing uncertainty analyses on AI models in safety-critical applications. Examples include:

  • Medical technology
  • Logistics
  • Autonomous driving

The first version of the application focuses on computer vision, in particular image recognition (perception) and general classification.

The new platform tells you when you can trust your Artificial Intelligence – and when you cannot. Once you can identify the point at which your AI fails, you can optimize the model and confidently deploy it in safety-critical applications.


The results of the Fraunhofer IKS analysis provide you with key information on how to improve your Artificial Intelligence, such as a neural network, so that it can also be used in safety-critical contexts. You will find out:

  • How reliable your model's predictions are with respect to specific robustness metrics,
  • How to interpret the results of the robustness analysis, and
  • Which general recommendations apply to the further development of your AI model.

Our other core competencies

In addition to Robust AI: Uncertainty Estimation, we also focus on the following topics:

Person and Object Detection

FAST – Feedback-guided Automation of Sub-tasks


Industrial Sensors

The automation of production requires reliable systems for the real-time monitoring and control of processes. Visit Industrial Sensors to learn more about our focus areas and the various projects Fraunhofer IKS is working on.

Modular Concept Learning

Standards for AI, Safety and Automation