Virtual Event / January 01, 2021 - January 31, 2021

AI Safety Workshop: Adrian Schwaiger presents his new paper on deep learning

Adrian Schwaiger of the Fraunhofer Institute for Cognitive Systems IKS will present his new paper at the AI Safety Workshop in January 2021. The workshop seeks to explore new ideas in safety engineering and focuses on strategic, ethical and policy aspects of safety-critical AI-based systems. Due to the current situation, it has not yet been decided whether the workshop will take place virtually or in Japan.

In their paper »Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?«, the IKS researchers compare state-of-the-art uncertainty quantification methods for deep neural networks with regard to their ability to detect inputs that differ from the data the network was trained on.
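The underlying idea, which the announcement only touches on, is that a network's predictive uncertainty can serve as a score for flagging out-of-distribution inputs. The short Python sketch below is only an illustration of that idea, not a method from the paper: it thresholds the entropy of softmax outputs, and both the logits and the threshold value are made-up example numbers.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Entropy of the predictive distribution; higher means more uncertain.
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def flag_ood(logits, threshold=1.0):
    # Flag inputs whose predictive entropy exceeds a chosen threshold.
    # The threshold is illustrative, not taken from the paper.
    return predictive_entropy(softmax(logits)) > threshold

# Illustrative logits: one confident prediction, one near-uniform one.
logits = np.array([[8.0, 0.5, 0.2],    # confident, in-distribution-looking
                   [0.4, 0.3, 0.5]])   # ambiguous, likely flagged as OOD
print(flag_ood(logits))  # -> [False  True]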


Talk at the AI Safety Workshop
Speaker: Adrian Schwaiger
Paper: »Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?«
Location: Japan / virtual