Dependable Environment Perception

Validation of environment models

Proper monitoring of the environment is essential for the safe operation of cognitive systems. Autonomous vehicles, for example, continuously measure their surroundings with various sensor technologies such as camera, radar and lidar systems. Using this data as a basis, and with the help of sensor fusion algorithms and artificial intelligence (AI), an environment model is created that in turn serves as the foundation for all decision-making.
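To make the idea of an environment model more concrete, the following is a minimal sketch of object-level sensor fusion, in which detections from camera, radar and lidar are grouped and merged by confidence-weighted averaging. All names (SensorDetection, fuse_detections) and the gating logic are illustrative placeholders, not an actual Fraunhofer IKS pipeline.

```python
# Minimal sketch: fuse per-sensor detections into a simple environment model.
# All class and function names are illustrative, not a real fusion stack.
from dataclasses import dataclass

@dataclass
class SensorDetection:
    sensor: str        # "camera", "radar" or "lidar"
    x: float           # longitudinal position in metres
    y: float           # lateral position in metres
    confidence: float  # sensor-specific confidence in [0, 1]

def fuse_detections(detections, gating_distance=2.0):
    """Group detections that lie close together and fuse each group
    into one object by confidence-weighted averaging."""
    groups = []  # each entry: detections believed to belong to one object
    for det in detections:
        for group in groups:
            ref = group[0]
            if abs(det.x - ref.x) < gating_distance and abs(det.y - ref.y) < gating_distance:
                group.append(det)
                break
        else:
            groups.append([det])

    environment_model = []
    for group in groups:
        total = sum(d.confidence for d in group)
        environment_model.append({
            "x": sum(d.x * d.confidence for d in group) / total,
            "y": sum(d.y * d.confidence for d in group) / total,
            "sensors": sorted({d.sensor for d in group}),
        })
    return environment_model

detections = [
    SensorDetection("camera", 20.1, 1.9, 0.80),
    SensorDetection("radar",  20.4, 2.2, 0.90),
    SensorDetection("lidar",  19.8, 2.0, 0.95),
]
print(fuse_detections(detections))
```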

Since the individual sensors have situation-dependent vulnerabilities, however, there must be a way to guarantee safe operation of a cognitive system despite these weak points. In automated driving systems, for instance, various factors can impair the quality of environment monitoring, such as weather conditions, road conditions and specific driving situations.
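One simple way to picture such situation-dependent weaknesses is a table of condition-dependent confidence penalties per sensor, as in the sketch below. The numeric factors and condition names are made-up placeholders for illustration only, not validated figures or a method described by the institute.

```python
# Illustrative sketch: scale a sensor's confidence by the conditions it is
# sensitive to. The factors are arbitrary example values, not real data.
CONDITION_FACTORS = {
    # (sensor, condition): multiplicative confidence penalty
    ("camera", "heavy_rain"): 0.5,
    ("camera", "low_sun"):    0.6,
    ("lidar",  "heavy_rain"): 0.7,
    ("radar",  "heavy_rain"): 0.95,  # radar degrades least in rain
}

def adjusted_confidence(sensor, base_confidence, conditions):
    """Scale a sensor's base confidence by all active environmental conditions."""
    factor = 1.0
    for condition in conditions:
        factor *= CONDITION_FACTORS.get((sensor, condition), 1.0)
    return base_confidence * factor

print(adjusted_confidence("camera", 0.8, ["heavy_rain", "low_sun"]))  # 0.24
print(adjusted_confidence("radar", 0.9, ["heavy_rain"]))              # 0.855
```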

The Fraunhofer Institute for Cognitive Systems IKS researches and develops systematic safety analysis methods that make it possible to reliably predict the full extent of the risks and optimize the safety, performance, dependability and cost of the system design. The tools developed from these activities can be utilized to satisfy the requirements of the »ISO 21448« standard, among other things.


Robust artificial intelligence in autonomous systems

AI and machine learning methods – deep neural networks (DNN), for instance – are clearly superior to conventional algorithms in many applications such as image and audio recognition or sensor data processing. But even though machine learning often functions well, in safety-critical applications such as autonomous driving, human lives are at stake. AI applications play an especially important role in the creation of environment models, since erroneous information can lead directly to dangerous decisions. At the moment, the biggest challenge in utilizing artificial intelligence is achieving this high quality benchmark.

Artificial intelligence must be robust and verifiable. Robust means that minor changes to the input data result only in minor changes to the output. Verification of AI-based algorithms therefore checks whether the AI method adheres to certain specifications, such as robustness.
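The following is a minimal sketch of an empirical robustness check in this sense: sample small perturbations of an input and test whether the prediction changes. The tiny linear "model" is a stand-in for any trained classifier, and sampling gives evidence rather than a formal proof.

```python
# Minimal sketch of an empirical robustness check: does a small input
# perturbation change the model's prediction? The linear model is only
# a placeholder for a trained classifier.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(10, 3))   # placeholder for trained parameters

def predict(x):
    return int(np.argmax(x @ weights))

def is_robust(x, epsilon=0.01, num_samples=100):
    """Return True if no sampled perturbation within +/-epsilon changes
    the predicted class (a sampling-based check, not a proof)."""
    reference = predict(x)
    for _ in range(num_samples):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + noise) != reference:
            return False
    return True

x = rng.normal(size=10)
print("prediction:", predict(x), "locally robust:", is_robust(x))
```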

Conventional software quality assurance approaches have limited utility for AI methods, however. Pure code-based tests examine whether a deep neural network is implemented correctly, but they overlook the fact that the system’s behavior stems from the trained parameters and input data, not from the programming.
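The sketch below illustrates this point under simplified assumptions: two "models" execute exactly the same code paths, so code coverage is identical, yet their behavior differs because it is determined by the weights rather than the code.

```python
# Sketch: identical code, different trained weights, different behaviour.
# Full code coverage of forward() says nothing about which answers it gives.
import numpy as np

def forward(x, weights):
    # identical implementation for both "models"
    return int(np.argmax(x @ weights))

rng = np.random.default_rng(42)
weights_a = rng.normal(size=(5, 2))   # stands in for one training run
weights_b = rng.normal(size=(5, 2))   # stands in for another training run

inputs = rng.normal(size=(1000, 5))
disagreements = sum(forward(x, weights_a) != forward(x, weights_b) for x in inputs)
print(f"identical code, different weights: {disagreements}/1000 predictions differ")
```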

A further problem is the sheer number of possible test scenarios. In real situations the system – an autonomous vehicle, for instance – is confronted with so many potential combinations of conditions that testing cannot cover all of them completely. Furthermore, with artificial intelligence, you cannot assume that a system which passes its tests will also function properly in real scenarios that diverge even slightly from the tested ones. Thus a key question is, »When is the artificial intelligence safe enough?«
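A back-of-the-envelope calculation shows how quickly the scenario space explodes; the dimensions and counts below are arbitrary examples, not a real operational design domain.

```python
# Illustrative only: even a coarse discretisation of a few scenario
# dimensions multiplies out to thousands of combinations.
scenario_dimensions = {
    "weather":         6,   # clear, rain, snow, fog, ...
    "road_type":       5,
    "lighting":        4,
    "traffic_density": 5,
    "pedestrians":     3,
    "vehicle_speeds": 10,
}

combinations = 1
for options in scenario_dimensions.values():
    combinations *= options

print(f"{combinations:,} discrete combinations")  # 18,000 here -- and each
# dimension is in reality continuous, so exhaustive testing is out of reach.
```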

To answer this question in the future, Fraunhofer IKS is researching new test coverage measures, from which new testing criteria can be derived. The goal is AI test methods that thoroughly cover the data space and identify critical scenarios.
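One simple, well-known example of such a coverage measure from the research literature is neuron coverage: the fraction of hidden units activated above a threshold by at least one test input. The sketch below shows the idea; it is only an illustrative example of a coverage metric, not one of the specific measures under development at Fraunhofer IKS.

```python
# Sketch of neuron coverage as one example of a test coverage measure for
# neural networks: which hidden units does the test suite ever activate?
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(10, 32))        # placeholder hidden-layer weights

def hidden_activations(x):
    return np.maximum(0.0, x @ W1)    # ReLU layer

def neuron_coverage(test_inputs, threshold=0.5):
    activated = np.zeros(W1.shape[1], dtype=bool)
    for x in test_inputs:
        activated |= hidden_activations(x) > threshold
    return activated.mean()

test_suite = rng.normal(size=(50, 10))
print(f"neuron coverage: {neuron_coverage(test_suite):.0%}")
```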

The institute is also developing new quality measures that satisfy the requirements of safety verification. To do that, researchers are evaluating and adapting measures from large-scale research projects such as the VDA flagship initiatives. Formal verification methods are also being investigated in order to guarantee adherence to the specification. Since a pure post-hoc analysis is insufficient here, certifiable learning methods for artificial intelligence will be developed that take the specification into account and adhere to it even during the training phase.
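One common way to make a robustness specification part of training, shown as a simplified sketch below, is to add a penalty for the worst-case output change under bounded input perturbations. For a linear model this worst case is exactly eps times the L1 norm of the weights, so it can be optimized directly; this is only an illustration of the general idea, not the certifiable learning methods referred to above.

```python
# Sketch of specification-aware training for a linear model: the worst-case
# output change under an L-infinity input perturbation of size eps equals
# eps * ||w||_1, so adding that term to the loss trains robustness in
# directly. Certifiable training for deep networks is far more involved.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

eps, lr, w = 0.1, 0.01, np.zeros(5)
for _ in range(2000):
    grad_mse = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the data loss
    grad_rob = eps * np.sign(w)                 # (sub)gradient of eps * ||w||_1
    w -= lr * (grad_mse + grad_rob)

print("weights:", np.round(w, 2))
print("certified output change for ||dx||_inf <= eps:", round(eps * np.abs(w).sum(), 3))
```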
