Adrian Schwaiger, research engineer at the Fraunhofer Institute for Cognitive Systems IKS, will present his new paper »Benchmarking Uncertainty Estimation Methods for Deep Learning With Safety-Related Metrics« at the SafeAI workshop, which is part of the AAAI-20 Conference in New York City. In the paper, Adrian Schwaiger and his colleagues Maximilian Henne, Karsten Roscher and Gereon Weiss compare current methods for estimating uncertainty in deep neural networks, with the goal of optimally balancing the safety and performance of cognitive systems.
Making uncertainties of AI measurable
Deep neural networks generally deliver accurate predictions, but they often fail to recognize when those predictions may be wrong. This lack of awareness about the reliability of their own outputs is a major obstacle to deploying such models in safety-critical applications, for example in autonomous vehicles or medical technology.
For artificial intelligence to be used in these areas, it must recognize how reliable its predictions are. Uncertainty must become measurable.
To this end, the authors evaluate and compare several state-of-the-art methods for estimating uncertainty in image classification against safety-related requirements, using metrics suited to describing the models' performance in safety-critical domains.
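To illustrate the general idea of making uncertainty measurable, the following is a minimal sketch of one widely used uncertainty estimate for classifiers: the predictive entropy of the softmax output. This is a generic example for illustration only, not the specific methods or safety-related metrics benchmarked in the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Entropy of the predicted class distribution;
    # higher values indicate a more uncertain prediction.
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Hypothetical logits for two inputs: one clear-cut, one ambiguous.
confident = softmax(np.array([8.0, 0.5, 0.2]))
ambiguous = softmax(np.array([1.0, 0.9, 1.1]))

print(predictive_entropy(confident) < predictive_entropy(ambiguous))  # True
```

A network whose softmax is nearly uniform across classes (high entropy) is signaling that its prediction should not be trusted, which is exactly the kind of self-assessment required in safety-critical settings.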
Presentation at the SafeAI Workshop during the AAAI-20 Conference
»Benchmarking Uncertainty Estimation Methods for Deep Learning With Safety-Related Metrics«
New York City, February 7, 2020, 4:00–5:20 p.m.