How will autonomous driving be made a reality? The Safe AI Engineering project addresses this question by providing the foundations for a generally accepted and practicable safety certification of AI on the market.

AI engineering as an enabler for a well-founded safety argumentation throughout the entire life cycle of an AI function
The Safe AI Engineering project investigates how artificial intelligence (AI) can be safely and traceably integrated into safety-critical systems, especially automated vehicles.
The aim is to develop methods and safety verifications which ensure both the correct operation and continuous monitoring of AI-based functions. This is intended to enable official approvals and encourage public acceptance.
The use of AI in safety-critical systems is complicated by several challenges: changing environmental conditions, complex and as yet insufficiently investigated interrelationships, and the continuous development of AI all make the provision of safety verifications difficult.
The Safe AI Engineering project takes an iterative, hands-on approach to meeting these challenges: on the basis of a specific AI perception function for pedestrian detection, the Safe AI Engineering methodology is developed and evaluated in three use cases of increasing complexity. In doing so, both real and synthetic data are used, quality metrics are assessed in accordance with relevant safety standards, and new methods for the data processing, monitoring and explainability of AI systems are applied.
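As an illustration of the kind of quality-metric assessment described above, the sketch below computes recall and precision for a pedestrian-detection function from counted outcomes. Which metrics and target values actually apply is determined by the relevant safety standards; the function and its inputs here are illustrative assumptions, not the project's methodology.

```python
def detection_metrics(true_positives: int,
                      false_positives: int,
                      false_negatives: int) -> tuple[float, float]:
    """Illustrative quality metrics for a pedestrian detector.

    Recall: share of actual pedestrians that were detected
    (missed pedestrians are the safety-critical case).
    Precision: share of reported detections that were real.
    """
    detected = true_positives + false_negatives
    reported = true_positives + false_positives
    recall = true_positives / detected if detected else 1.0
    precision = true_positives / reported if reported else 1.0
    return recall, precision

# 90 pedestrians detected, 10 false alarms, 10 pedestrians missed:
recall, precision = detection_metrics(90, 10, 10)
print(recall, precision)  # 0.9 0.9
```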
The aim of the project is to obtain a safety argumentation for AI functions for autonomous driving across the entire lifecycle. This covers planning, development, testing, deployment, monitoring and embedding in the overall system.
Test methods and standards (ISO 26262, SOTIF/ISO 21448, ISO/PAS 8800) are also integrated. This enables the Safe AI Engineering project to close the gap between verification and validation on the one hand and safety certification of AI on the other.
In this project, Fraunhofer IKS co-leads the work package on the formal underpinning of the safety argumentation.
The focus is on deriving technical safety requirements and acceptance criteria from standards, as well as analysing complex cause-and-effect relationships and their impact on uncertainties during the safety assessment of AI-based systems. To this end, an uncertainty-aware argumentation approach is being developed which formulates safety requirements as design contracts and demonstrates their fulfilment at both the component and the function level.
Fraunhofer IKS is also developing a formalised approach which can clearly demonstrate the integrity, validity and confidence of the safety argumentation and quantify the degree of contractual fulfilment. This enables a transparent and traceable assessment of AI safety.
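A minimal sketch of how safety requirements might be expressed as assume/guarantee design contracts and the degree of fulfilment quantified over collected evidence. The contract structure, field names and thresholds are illustrative assumptions, not the project's actual formalism or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SafetyContract:
    """A safety requirement as an assume/guarantee design contract.

    assumption: is the operating context within the contract's scope?
    guarantee: does the component meet its requirement in that context?
    """
    name: str
    assumption: Callable[[Dict], bool]
    guarantee: Callable[[Dict], bool]

def fulfilment_degree(contracts: List[SafetyContract],
                      evidence_log: List[Dict]) -> float:
    """Fraction of applicable contract evaluations that were satisfied.

    Evaluations outside a contract's assumed context do not count
    against it (the contract is vacuously fulfilled there).
    """
    applicable = satisfied = 0
    for evidence in evidence_log:
        for contract in contracts:
            if contract.assumption(evidence):
                applicable += 1
                if contract.guarantee(evidence):
                    satisfied += 1
    return satisfied / applicable if applicable else 1.0

# Illustrative component-level contract for pedestrian detection:
# assume daylight conditions, guarantee a minimum recall.
contract = SafetyContract(
    name="pedestrian_detection_recall",
    assumption=lambda e: e["illumination_lux"] >= 1000,
    guarantee=lambda e: e["recall"] >= 0.99,
)

log = [
    {"illumination_lux": 5000, "recall": 0.995},  # in scope, satisfied
    {"illumination_lux": 5000, "recall": 0.950},  # in scope, violated
    {"illumination_lux": 10,   "recall": 0.800},  # out of scope, ignored
]
print(fulfilment_degree([contract], log))  # 0.5
```

Separating the assumption from the guarantee keeps the argument honest: a contract violated outside its declared operating context points to a scoping gap rather than a component failure.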
To enable a smooth transition of the concepts to the existing hardware platforms, Fraunhofer IKS is also focussing on the integration and validation of the AI system created as part of the project. This will, for example, enable an early validation of the developed concepts on a virtual platform.
Fraunhofer IKS also contributes methods for continuous safety engineering and runtime safety with its APIKS platform.
APIKS enables rapid prototyping and system-level validation by linking a modular ROS 2-based stack with simulation tools such as CARLA. This supports the iterative development, integration and continuous monitoring of AI-based perception functions under realistic conditions and helps to substantiate safety claims. It thereby builds the trust necessary for approval and public acceptance prior to integration into actual hardware.
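The continuous-monitoring idea can be sketched as follows, in plain Python rather than the actual ROS 2/CARLA stack: a runtime monitor inspects each perception frame and flags low-confidence pedestrian detections so the overall system can react, for example by reducing speed. The confidence threshold and data layout are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One object reported by the perception function for a frame."""
    label: str
    confidence: float

def monitor_frame(detections: List[Detection],
                  min_confidence: float = 0.5) -> dict:
    # Flag pedestrian detections whose confidence falls below the
    # threshold; the frame is only considered safe if none are flagged.
    flagged = [d for d in detections
               if d.label == "pedestrian" and d.confidence < min_confidence]
    return {"safe": not flagged, "low_confidence": flagged}

frame = [Detection("pedestrian", 0.92), Detection("pedestrian", 0.31)]
print(monitor_frame(frame)["safe"])  # False
```

In a real stack, such a monitor would run alongside the perception node and feed a degradation strategy; here it only demonstrates the check itself.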
Through its structured approach, the Safe AI Engineering project creates solid building blocks for the safety certification of AI functions in vehicles. The methodology developed is intended to serve as the basis for a uniform, traceable demonstration of the safety of AI in automated vehicles and thus contribute to regulatory approval. Realistic demonstrators (including a virtual simulation environment) will also be developed, and measurable progress in the robustness and transparency of AI systems will be achieved.
Safe AI Engineering is a project of the KI Familie. It was initiated and developed by the VDA Leitinitiative autonomous and connected driving and is funded by the Federal Ministry for Economic Affairs and Energy.