Validation of autonomous systems
Through its participation in the ADA Lovelace Center, the Fraunhofer Institute for Cognitive Systems IKS is conducting research into the validation of autonomous systems in domains such as autonomous driving and Industry 4.0. Even their basic functionality requires machine learning processes that can monitor and handle complex, unknown situations. Systems with higher automation levels (level 3 or higher) often rely on so-called deep learning approaches, which employ neural networks for context recognition. This enables autonomous vehicles to carry out tasks such as recognizing objects, interpreting traffic situations and deriving driving instructions.
Safe AI: new methods for validating AI processes
Neural networks, however, cannot be validated with conventional methods: even when deep learning approaches perform well, there is no way to comprehend from the outside why a neural network makes a specific decision. For this reason, these processes currently cannot be employed without additional safeguards in safety-critical systems such as autonomous driving or driverless industrial transport systems.
With this in mind, Fraunhofer IKS is conducting research into new methods that will provide the means to adequately validate AI processes for use in high-performance embedded systems. The institute is also developing mechanisms that verifiably monitor all of the characteristics required for the safety of autonomous systems.
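One way to picture such a monitoring mechanism is a runtime check that sits between the neural network and the actuators and only lets a decision pass if required safety properties hold. The sketch below is purely illustrative, assuming a hypothetical `Detection` type and hand-picked thresholds; it is not a Fraunhofer IKS interface.

```python
from dataclasses import dataclass

# Illustrative assumption: the perception network emits detections with a
# label, a confidence score and an estimated distance to the object.
@dataclass
class Detection:
    label: str
    confidence: float
    distance_m: float

def safety_monitor(detections, min_confidence=0.9, min_safe_distance_m=5.0):
    """Runtime monitor: allow autonomous operation only if every detection
    satisfies the (assumed) safety properties; otherwise request a safe
    fallback such as slowing down or handing control back."""
    for d in detections:
        if d.confidence < min_confidence:
            return "fallback"  # the network is unsure -> degrade safely
        if d.label == "pedestrian" and d.distance_m < min_safe_distance_m:
            return "fallback"  # safety distance violated
    return "continue"
```

The point of such a design is that the monitor itself is simple enough to be verified, even if the neural network it supervises is not.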
Our researchers are furthermore working on a flexible validation concept that can be applied to various learning processes and different types of sensors. This means, for example, that partially interpretable information produced by machine learning processes, such as explainable AI, can be incorporated into the validation.
Fraunhofer IKS is also examining suitable AI methods for the monitoring process itself, which can serve as redundant paths or monitor machine learning processes whose decisions are not comprehensible from the outside.
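The idea of a redundant path can be sketched as independent models voting on the same input, with a safe fallback whenever they disagree. This is a minimal illustration under assumed names and thresholds, not the institute's actual method.

```python
from collections import Counter

def redundant_vote(predictions, min_agreement=2, fallback="safe_stop"):
    """Majority vote over the outputs of independently developed models.

    If no label is supported by at least `min_agreement` models, the
    disagreement itself is treated as a warning and a safe fallback is
    returned instead of trusting any single path.
    """
    label, count = Counter(predictions).most_common(1)[0]
    return label if count >= min_agreement else fallback
```

For example, `redundant_vote(["car", "car", "truck"])` accepts "car", while three mutually disagreeing predictions trigger the fallback.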