Robuscope: Testing AI models for robustness online

Fraunhofer IKS brings together safety and artificial intelligence (AI): Robuscope, a new application from the Fraunhofer Institute for Cognitive Systems IKS, assesses the reliability of artificial intelligence at the push of a button.

Robuscope allows industrial companies, universities and other scientific institutions to test the robustness of their AI models' results. The platform uses metrics for uncertainty and robustness quantification to assess the reliability of the tested AI models' analyses. The online tool from Fraunhofer IKS provides detailed information on how the tested AI models can be optimized and thus made safer.

Specifically, Robuscope provides answers to the following questions:

  • How reliable is the self-assessment of the artificial intelligence?
  • How robust is the model?
  • What is the quality of its predictions?

And what’s more: the tool does not need sensitive data to make reliable statements.

No sensitive or confidential data, such as the AI model itself or real data, needs to be uploaded to use the tool.


Optimization for safety-critical AI applications in computer vision

Robuscope is particularly suitable for performing uncertainty analyses on AI models in safety-critical applications. Examples include:

  • Medical technology
  • Logistics
  • Autonomous driving

The first version of the application focuses on computer vision, in particular image recognition (perception) and general classification.

Your benefits: Safe, explainable and robust artificial intelligence

Robuscope, the new Fraunhofer IKS platform, tells you when you can trust your artificial intelligence – and when you cannot. Once you can identify the point at which your AI fails, you can optimize the model and confidently deploy the AI in safety-critical applications.

Results with practical benefits

The results of the Fraunhofer IKS analysis provide you with key information on how to improve your artificial intelligence, such as your neural network, so that it can also be used in safety-critical contexts. You find out

  • how reliable your model's predictions are with respect to specific robustness metrics,
  • how to interpret the results of the robustness analysis, and
  • which general recommendations apply to the further development of your AI model.

Example: Safe AI in medical diagnostics

Artificial intelligence is becoming increasingly important in medical diagnostics, in particular for the analysis of medical image data such as CT or MRI scans. Incorrect diagnoses or undetected health issues can have serious consequences for patients. The results of an AI model must therefore always be reliable and transparent.

We particularly want to know how an AI system deals with unclear results. Fraunhofer IKS wants to use the degree of uncertainty of predictions to make AI models safer: the system should only make a definitive diagnosis if it is certain it is the right one. If there is too much uncertainty, the system must communicate this to the medical staff so that the data can be reviewed manually. This is where Robuscope, the Fraunhofer IKS online application, comes in: it checks how robust and safe the predictions of an AI system are.
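The deferral principle described above can be sketched in a few lines. This is a minimal illustration, not part of Robuscope; the function name and the confidence threshold are assumptions chosen for the example.

```python
# Hypothetical sketch of the deferral logic described above: the system
# only commits to a diagnosis when the model's confidence clears a
# threshold; otherwise the case is flagged for manual review by staff.
# The threshold value 0.9 is an illustrative assumption.

def triage(probabilities, threshold=0.9):
    """Return (class_index, None) for a confident prediction,
    or (None, 'review') when uncertainty is too high."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] >= threshold:
        return best, None
    return None, "review"

print(triage([0.02, 0.95, 0.03]))  # confident prediction -> (1, None)
print(triage([0.40, 0.35, 0.25]))  # too uncertain -> (None, 'review')
```

In practice the threshold would be tuned on validation data so that deferred cases are rare enough to be manageable while unreliable predictions are reliably caught.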

A doctor examining CT images of a spine

This is how Robuscope works

Robuscope allows you to test your AI models via a straightforward user interface.

Step 1: Creating a file

To check the robustness of your result data set, and thus of the underlying AI such as your neural network, you first need a file (.json) that contains these data. There is no minimum number of required data points, but the more data points your file contains and the more representative they are, the better the result of the analysis. We recommend at least 30 data points per class/category in the dataset.

Please note: You do not need to provide any sensitive data, such as your AI algorithm, for the analysis, only a result data set. Instead of real data, this data set can also contain sample data or anonymized data.
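The exact JSON schema Robuscope expects is not documented here; the sketch below only illustrates the general shape such a result data set could take: per-sample predicted class probabilities and ground-truth labels, with no raw images or model weights. All field names are assumptions — consult the upload page for the actual format.

```python
import json

# Hypothetical result data set: for each sample, the model's predicted
# class probabilities and the true label. The field names "probabilities"
# and "label" are illustrative assumptions, not Robuscope's documented
# schema. Note that no model code or raw input data is included.
results = [
    {"probabilities": [0.92, 0.05, 0.03], "label": 0},
    {"probabilities": [0.10, 0.80, 0.10], "label": 1},
    {"probabilities": [0.34, 0.33, 0.33], "label": 2},
]

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```

Because only model outputs and labels are exported, the file can also be built from anonymized or sample data, as noted above.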

Step 2: Uploading a file

Upload your file to the Robuscope website via the relevant field.

Step 3: Analysis and assessment

The Fraunhofer IKS tool now analyzes your data. Robuscope determines how reliable your AI results are by evaluating your result data against safety-related metrics. Based on this, you are given advice on which common methods of uncertainty quantification you can use to improve the results, which in turn gives you a more reliable basis for decisions made with your AI.

You can download and save the results of the analysis as a standalone and interactive HTML file.

This work was funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Institute for Cognitive Systems.