Safe AI Engineering

AI engineering as an enabler for a well-founded safety argumentation throughout the entire life cycle of an AI function

How will autonomous driving be made a reality? The Safe AI Engineering project addresses this question by providing the foundations for generally accepted and practical safety certification for AI in the market.

AI in automated vehicles must be safe

The Safe AI Engineering project investigates how artificial intelligence (AI) can be safely and traceably integrated into safety-critical systems, especially automated vehicles.

The aim is to develop methods and safety verifications that ensure both the correct operation and the continuous monitoring of AI-based functions. This is intended to enable official approval and encourage public acceptance.

The challenge: AI decisions lack traceability

The use of AI in safety-critical systems is complicated by several challenges. These include:

  • the complexity of modern AI models,
  • the limited traceability of decisions made by AI,
  • the variety of realistic and synthetic driving data, and
  • compliance with and linking of established safety standards.

In addition, changing environmental conditions, complex and as yet insufficiently investigated interrelationships, and the continuous development of AI make it difficult to provide safety verifications.

Iterative approach to the safety argumentation of AI functions

The Safe AI Engineering project takes an iterative, hands-on approach to meeting these challenges: on the basis of a specific AI perception function for pedestrian detection, the Safe AI Engineering methodology is developed and evaluated in three use cases of increasing complexity. In doing so, both real and synthetic data are used, quality metrics are assessed in accordance with relevant safety standards, and new methods for data processing, monitoring and explainability of AI systems are applied.

The aim of the project is to obtain a safety argumentation for AI functions for autonomous driving throughout the entire life cycle. This includes planning, development, testing, deployment, monitoring and embedding in the overall system.

Established test methods and standards (ISO 26262, SOTIF (ISO 21448) and ISO/PAS 8800) are also integrated. This enables the Safe AI Engineering project to close the gaps in verification and validation as well as in safety certification for AI.

Fraunhofer IKS in the Safe AI Engineering project

Co-leadership for the formal underpinning of safety argumentation 

In this project, Fraunhofer IKS is responsible for co-leading the work package on the formal underpinning of the safety argumentation.

The focus is on deriving technical safety requirements and acceptance criteria from standards, as well as analysing complex cause-and-effect relationships and their impact on uncertainties during the safety assessment of AI-based systems. To this end, an uncertainty-driven argumentation approach is developed that formulates the safety requirements as design contracts and demonstrates their fulfilment at the component and functional levels.
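Such a design contract can be thought of as a pair of predicates: an assumption about the operating context and a guarantee on the component's output that must hold whenever the assumption applies. The following minimal Python sketch is illustrative only; the class, field names and thresholds are hypothetical and do not represent the project's actual formalism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """A safety requirement as an assumption/guarantee pair (hypothetical sketch)."""
    name: str
    assumption: Callable[[dict], bool]   # does the contract apply in this context?
    guarantee: Callable[[dict], bool]    # what the component output must satisfy

    def satisfied(self, context: dict, output: dict) -> bool:
        # A contract is vacuously satisfied when its assumption does not
        # apply; otherwise the guarantee must hold for the output.
        return (not self.assumption(context)) or self.guarantee(output)

# Illustrative example: pedestrian detection must reach a minimum recall
# in daylight conditions (threshold values are invented for illustration).
daylight_recall = Contract(
    name="pedestrian-recall-daylight",
    assumption=lambda ctx: ctx["illumination_lux"] >= 1000,
    guarantee=lambda out: out["recall"] >= 0.95,
)

print(daylight_recall.satisfied({"illumination_lux": 5000}, {"recall": 0.97}))  # True
print(daylight_recall.satisfied({"illumination_lux": 5000}, {"recall": 0.80}))  # False
print(daylight_recall.satisfied({"illumination_lux": 10}, {"recall": 0.80}))    # True (assumption inactive)
```

Formulating requirements this way makes the component-level obligation explicit: evidence only needs to cover the contexts in which the assumption holds.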

Fraunhofer IKS is also developing a formalised approach which can clearly demonstrate the integrity, validity and confidence of the safety argumentation and quantify the degree of contractual fulfilment. This enables a transparent and traceable assessment of AI safety.  
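One simple way to quantify a degree of contract fulfilment is the fraction of applicable evaluation cases in which the guarantee holds. The sketch below is a hypothetical illustration of that idea; the function name, the metric and the data are assumptions, not the formalised approach developed in the project.

```python
def fulfilment_degree(cases, assumption, guarantee):
    """Share of cases satisfying the guarantee, among those where the
    assumption applies (hypothetical fulfilment metric)."""
    applicable = [c for c in cases if assumption(c)]
    if not applicable:
        return 1.0  # vacuously fulfilled: the contract never applied
    return sum(1 for c in applicable if guarantee(c)) / len(applicable)

# Invented evaluation cases for a daylight pedestrian-recall contract.
cases = [
    {"lux": 5000, "recall": 0.97},
    {"lux": 4000, "recall": 0.92},
    {"lux": 10,   "recall": 0.50},   # assumption does not apply (night)
    {"lux": 3000, "recall": 0.96},
]
degree = fulfilment_degree(
    cases,
    assumption=lambda c: c["lux"] >= 1000,
    guarantee=lambda c: c["recall"] >= 0.95,
)
print(round(degree, 3))  # 0.667 — guarantee holds in 2 of 3 applicable cases
```

A scalar like this can then feed into a transparent assessment, for example by comparing it against an acceptance criterion derived from standards.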

To enable a smooth transition of the concepts to the existing hardware platforms, Fraunhofer IKS is also focussing on the integration and validation of the AI system created as part of the project. This will, for example, enable an early validation of the developed concepts on a virtual platform.

Our APIKS platform supports continuous safety engineering

Fraunhofer IKS also contributes methods for continuous safety engineering and runtime safety with its APIKS platform.

APIKS enables rapid prototyping and system-level validation by linking a modular ROS2-based stack with simulation tools such as CARLA. This supports the iterative development, integration and continuous monitoring of AI-based perception functions under realistic conditions and helps to underpin safety claims. It thus creates the trust necessary for approval and public acceptance before integration into actual hardware.
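As a rough illustration of runtime monitoring in such a stack, the sketch below flags low-confidence pedestrian detections so that a supervisor could trigger a degraded mode. It is deliberately reduced to plain Python: in a ROS2-based stack this logic would live in a node subscribing to detection messages, and all names and thresholds here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single perception result (hypothetical message format)."""
    label: str
    confidence: float

def monitor(detections, min_confidence=0.5):
    """Return the pedestrian detections whose confidence falls below a
    threshold, so the system can react (e.g. reduce speed or hand over)."""
    return [
        det for det in detections
        if det.label == "pedestrian" and det.confidence < min_confidence
    ]

# One simulated camera frame with three detections.
frame = [
    Detection("pedestrian", 0.92),
    Detection("pedestrian", 0.31),  # below the illustrative threshold
    Detection("car", 0.88),
]
alerts = monitor(frame)
print(len(alerts))  # 1 low-confidence pedestrian detection flagged
```

Running such a monitor first against simulated CARLA frames and only later on vehicle hardware is exactly the kind of early, system-level validation the platform is meant to support.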

Safe AI Engineering paves the way for regulatory approval

Through its structured approach, the Safe AI Engineering project creates solid building blocks for the safety certification of AI functions in vehicles. The Safe AI Engineering methodology is intended to serve as the basis for a uniform, traceable proof of the safety of AI in automated vehicles and thus to contribute to regulatory approval. Realistic demonstrators (including a virtual simulation environment) will also be developed, and measurable progress in the robustness and transparency of AI systems will be achieved.

Safe AI Engineering: facts and figures


Project details

Safe AI Engineering is a project of the KI Familie. It was initiated and developed by the VDA Leitinitiative autonomous and connected driving and is funded by the Federal Ministry for Economic Affairs and Energy.


Project timeline: March 2025 – February 2028 

Total budget: EUR 34.5 million 

24 project partners: DXC Luxoft GmbH, German Aerospace Center, Akkodis Germany GmbH, AVL Deutschland GmbH, German Federal Highway and Transport Research Institute, Bertrandt Ing.-Büro GmbH, Robert Bosch GmbH, Capgemini Engineering Deutschland S.A.S. & Co KG, Cariad SE, Continental Automotive Technologies GmbH, Fraunhofer-Gesellschaft e.V., FZI Research Center for Information Technology, Intel Deutschland GmbH, Karlsruhe Institute of Technology (KIT), Mercedes-Benz AG, Opel Automobile GmbH, Porsche AG, Spleenlab GmbH, TU Berlin, TU Braunschweig, TÜV AI.Lab GmbH, Valeo Schalter und Sensoren GmbH, ZF Friedrichshafen AG


Previous project

Demonstrating AI safety

The Safe AI Engineering project builds on the results of the AI Assurance project. Fraunhofer IKS also participated in this project, in which a rigorous and verifiable chain of safety arguments for AI functions in highly automated vehicles was developed.

More information


Autonomous Driving

Will the cars of the future drive autonomously? This vision of the future will only become reality if autonomous driving is safe. The Fraunhofer Institute for Cognitive Systems IKS is therefore working on adaptive software architectures for automobiles.


Safety Engineering

The electronics in vehicles and industrial plants are becoming increasingly complex. Safety engineering plays an important role in meeting the high safety requirements. That is why Fraunhofer IKS conducts research in this area, which is important for many branches of industry.


Safety Assurance

Fraunhofer IKS is researching the requirements that AI must meet in order to be sufficiently safe. We are also working on safety cases to verify the safety of the overall system.

Funding logo of the Federal Ministry for Economic Affairs and Energy