Testing AI models for robustness online

Fraunhofer IKS brings together safety and artificial intelligence (AI): A new application from the Fraunhofer Institute for Cognitive Systems IKS assesses the reliability of artificial intelligence at the push of a button.

This allows industrial companies, universities and other scientific institutions to test the robustness of their AI models' results. The new platform uses uncertainty and robustness quantification metrics to assess how reliable the analyses of the tested AI models are. The online tool from Fraunhofer IKS provides detailed information on how the tested AI models can be optimized and thus made safer.

Specifically, the tool provides answers to the following questions:

  • How reliable is the self-assessment of the artificial intelligence?
  • How robust is the model?
  • What is the quality of its predictions?

What’s more: the tool does not need sensitive data to make reliable statements

No sensitive or confidential data, such as the AI model itself or real data, needs to be uploaded to use the tool.


Optimization for safety-critical AI applications in computer vision

The tool is particularly suitable for performing uncertainty analyses on AI models in safety-critical applications. Examples include:

  • Medical technology
  • Logistics
  • Autonomous driving

The first version of the application focuses on computer vision, in particular image recognition (perception) and classification. Plans are in place to extend the tool to include regression analysis.

Your benefits: Safe, explainable and robust artificial intelligence

The new Fraunhofer IKS platform tells you when you can trust your artificial intelligence – and when you cannot. Once you can identify the point at which your AI fails, you can optimize the model and confidently deploy it in safety-critical applications.

Results with practical benefits

The results of the Fraunhofer IKS analysis provide you with key information on how to improve your artificial intelligence or neural network so that it can also be used in safety-critical contexts. You will find out:

  • how reliable your model's predictions are with respect to specific robustness metrics,
  • which methods can make the results of the algorithm more reliable,
  • and how to improve the training of your artificial intelligence.

You will also be given general recommendations on how to develop your AI model.

Example: Safe AI in medical diagnostics

Artificial intelligence is becoming increasingly important in medical diagnostics, in particular for the analysis of medical image data such as CT or MRI scans. Incorrect diagnoses or undetected health issues can have serious consequences for patients. The results of an AI model must therefore always be reliable and transparent.

A key question is how the AI system behaves when the results are ambiguous. Fraunhofer IKS wants to use the degree of uncertainty of predictions to make AI models safer. The system must only make a definitive diagnosis if it is certain that the diagnosis is correct. If there is too much uncertainty, the system must communicate this to the medical staff so that the data can be reviewed manually. This is where the online application of Fraunhofer IKS comes in: it checks how robust and safe the predictions of an AI system are.
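The deferral behavior described above, making a diagnosis only when the model is confident enough and otherwise routing the case to medical staff, can be sketched as follows. This is a generic illustration of uncertainty-based deferral, not code from the Fraunhofer IKS tool; the threshold value and class indices are illustrative assumptions.

```python
import numpy as np

def diagnose_or_defer(probs, threshold=0.9):
    """Return the predicted class if the model is confident enough,
    otherwise defer the case to manual review by medical staff.

    probs: 1-D array of class probabilities (e.g. from a softmax).
    threshold: minimum confidence for an automatic diagnosis
               (illustrative value, not prescribed by the tool).
    """
    confidence = float(np.max(probs))
    if confidence >= threshold:
        return {"decision": int(np.argmax(probs)), "confidence": confidence}
    return {"decision": "defer_to_human", "confidence": confidence}

# Confident prediction -> automatic diagnosis of class 1
print(diagnose_or_defer(np.array([0.02, 0.95, 0.03])))
# Ambiguous prediction -> flagged for manual review
print(diagnose_or_defer(np.array([0.40, 0.35, 0.25])))
```

The design choice here is deliberately conservative: the system never silently guesses on uncertain inputs, which is exactly the property the platform's uncertainty metrics are meant to verify.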

[Image: A physician viewing CT scans of a spine]
© iStock.com/megaflopp

This is how the platform works

The Fraunhofer IKS platform allows you to test your AI models via a straightforward user interface.

Step 1: Creating a file

To check your result data set, and thus the underlying AI model or neural network, for robustness, you first need a file (.json or .xlsx) that contains this data. There is no minimum number of required data points, but the more data points your file contains, the better the result of the analysis. We recommend at least 100 data points.

Please note: You do not need to provide any sensitive data, such as your AI algorithm, for the analysis, only a result data set. Instead of real data, this data set can also contain sample data or anonymized data.
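A result data set of this kind could be assembled as in the following sketch. Note that the exact file schema expected by the Fraunhofer IKS tool is not specified here; the field names (`predicted`, `confidence`, `label`) are illustrative assumptions, and the entries are sample data of the kind the step above explicitly permits.

```python
import json

# Illustrative result data set: each entry pairs a model prediction and
# its confidence score with the ground-truth label. Field names are an
# assumption for illustration, not the tool's documented schema.
results = [
    {"predicted": "tumor",   "confidence": 0.97, "label": "tumor"},
    {"predicted": "healthy", "confidence": 0.61, "label": "tumor"},
    {"predicted": "healthy", "confidence": 0.88, "label": "healthy"},
    # in practice, at least 100 such data points are recommended
]

with open("result_dataset.json", "w") as f:
    json.dump(results, f, indent=2)
```

Because only prediction results are written out, neither the model weights nor the original images ever leave your infrastructure.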

Step 2: Uploading a file

Upload your file to the Fraunhofer IKS application via the relevant field.

Step 3: Analysis and assessment

The Fraunhofer IKS tool now analyzes your data. The application determines how reliable your AI results are by evaluating the uploaded result data set against safety-related metrics. Based on this, you will be given advice on which common methods of uncertainty quantification you can use to improve the results, giving you a more reliable decision-making basis for your AI.

As standard, you receive an evaluation based on the following five methods:

  • Confusion
  • Calibration
  • Uncertainty Ratios
  • Remaining Error
  • Prediction Rejection Curve
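To give a concrete sense of what such metrics measure, here is a minimal sketch of one of them, calibration, in the form of the widely used expected calibration error (ECE). This is a generic textbook implementation, assumed for illustration only, not the tool's own code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error (ECE): the weighted average gap
    between the model's stated confidence and its actual accuracy,
    computed per confidence bin. A well-calibrated model has an
    ECE close to 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy case: 80% stated confidence, 4 of 5 predictions correct -> gap of 0
conf = [0.8, 0.8, 0.8, 0.8, 0.8]
hit  = [1, 1, 1, 1, 0]
print(round(expected_calibration_error(conf, hit), 4))
```

A large ECE indicates an overconfident (or underconfident) model, which is precisely the kind of finding the platform's recommendations aim to address.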

You can download and save the results of the analysis as a PDF file.


This work was funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.