Continuous Safety Assurance for AI-based Driving Functions

Safe Development, Assurance, and Operation of AI-based Functions for Connected Automated Mobility

Continuous Safety Assurance is a comprehensive approach to bring AI-based driving functions safely and quickly into operation. Starting from clearly defined safety goals, acceptable residual risk, and a precise ODD specification, we support you in creating guidelines for architecture, metrics, and efficient verification – powered by safe LLMs/agents.

Building on this, structured, evidence-based safety cases with confidence assessment and standards-compliant arguments (ISO PAS 8800, SOTIF, EU AI Act) provide the foundation for approval. For operation, we support you in setting up monitoring and safety performance indicators, governance mechanisms, and triggers for safe updates, as well as the feedback of runtime evidence into your safety argumentation. Rapid prototyping accelerates integration and validation in standardized environments.

This creates a closed loop of development, operation, and learning: risks remain controlled, compliance is demonstrable, and your solution improves iteratively – with measurable safety.

Application areas range from small neural networks on microcontrollers (Embedded AI) – e.g., as virtual sensors in engine control – to complex AI-based functions in the vehicle or infrastructure for perception, trajectory planning, or end-to-end AI architectures from object detection to decision-making.
 

We offer support in the following areas:

  • Development of safety-critical systems: Goals/residual risk/ODD, metrics, safe ML architectures/monitors, efficient V&V, LLMs/agents.
  • Continuous safety assurance in operation: Assumption and SPI monitoring, early mitigation, root cause analysis, safe OTA updates.
  • Evidence-based safety argumentation: Structured safety cases, confidence assessment, compliance; dynamic cases with runtime evidence and re-argumentation.
  • Rapid prototyping: Development of new AI functions or V&V of your AI functions in a standardized architecture using our APIKS framework.

 

Contact us now


Our Mobility Solutions

Fraunhofer IKS offers a wide range of solutions for various sectors and applications. Our customers come from sectors including automotive, rail, and aviation. Jump straight to the solutions that best suit your needs, or contact us directly:

  • Development of Safety-Critical Systems

    The use of AI often enables the realization of new driving functions in the first place. However, deploying AI in safety-critical contexts introduces risks that are difficult to predict.

    Fraunhofer IKS offers a coordinated portfolio of solutions to identify and mitigate these risks to an acceptable level.
     

    In detail, this portfolio includes the following development-phase measures:

    • Definition of acceptable residual risk and acceptance criteria: We support you in defining acceptable residual risk (e.g., using MEM, ALARP, GAMAB, or positive risk balance) and in breaking it down into measurable acceptance criteria for individual components and ML models.
    • Definition of the Operational Design Domain (ODD): Define ODDs based on the latest standards and automatically link your ODD definitions to downstream safety analyses and V&V activities.
    • Selection of relevant safety-related properties and metrics: Develop a traceable concept for how safety requirements can be met through a concrete combination of safety-relevant metrics.
    • Safe ML architectures and monitors: Use proven ML architectures to save time in model selection and automatically derive runtime monitors from your safety argumentation.
    • Efficient V&V: Use methods for targeted evidence collection and build empirically robust safety arguments. Fraunhofer IKS provides methods to integrate performance data – such as object detection accuracy, trajectory planning precision, localization accuracy, and collision rate – into a safety argumentation framework (see the sketch after this list).
    • Regulatory and standards compliance: Use our Assurance Framework for AI High-Risk Systems to achieve compliance with standards such as ISO 21448, ISO PAS 8800, or the EU AI Act.
    • LLMs & Agents: Want to take your safety engineering to the next level? We support the safe use of LLMs and agents in your development processes, e.g., for HARA or requirements engineering.
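
    As a minimal illustration of how a system-level residual-risk target can be broken down into measurable, component-level acceptance criteria and checked against V&V evidence, consider the following Python sketch. All component names, metrics, and thresholds are hypothetical placeholders; the concrete breakdown depends on your ODD, your architecture, and the chosen risk acceptance principle (e.g., MEM or positive risk balance).

    ```python
    from dataclasses import dataclass

    @dataclass
    class AcceptanceCriterion:
        """A measurable, component-level criterion derived from a system-level risk target.

        All names and thresholds used below are illustrative, not recommended values.
        """
        component: str
        metric: str
        threshold: float
        higher_is_better: bool

    def is_satisfied(criterion: AcceptanceCriterion, measured: float) -> bool:
        """Check a single V&V measurement against its acceptance criterion."""
        if criterion.higher_is_better:
            return measured >= criterion.threshold
        return measured <= criterion.threshold

    # Hypothetical breakdown of a system-level residual-risk target
    # into criteria for individual components and ML models.
    criteria = [
        AcceptanceCriterion("pedestrian_detection", "miss_rate_per_frame", 1e-4, higher_is_better=False),
        AcceptanceCriterion("trajectory_planner", "min_time_to_collision_s", 1.5, higher_is_better=True),
    ]

    # Performance data collected during V&V (placeholder values).
    measurements = {"pedestrian_detection": 8e-5, "trajectory_planner": 1.7}

    for c in criteria:
        ok = is_satisfied(c, measurements[c.component])
        print(f"{c.component}/{c.metric}: {'PASS' if ok else 'FAIL'}")
    ```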

     

    Contact us now

  • Continuous Safety Assurance During Operation

    After all safety engineering steps have been performed to bring a safe AI-based product to market, it can be assumed that the system’s residual risk is acceptably low.

    However, the obligation to ensure safety does not end there. Several standards and regulations, such as ISO 21448, require runtime monitoring to detect previously unknown risks in the field and take appropriate countermeasures.

    Fraunhofer IKS supports companies in efficiently and safely meeting these requirements through the establishment of a Continuous Safety Engineering process (also known as Dynamic Safety). 
     

    This process addresses the following aspects:

    • Automated monitoring: By automatically generating monitoring components from a formalized safety argumentation, all uncertainty-related assumptions can be easily monitored in the field (a minimal sketch follows this list).
    • Early risk mitigation: With multi-layered monitoring structures, deviations from expected behavior can be detected early through abnormal metric values, enabling proactive mitigation before incidents occur.
    • Efficient root cause analysis: Automated storage of relevant data and feedback to development systems allows efficient root cause identification and rapid design of appropriate countermeasures.
    • Safe updates: We support you in designing solutions to provide fast and safe over-the-air updates for vehicle fleets.
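
    As a minimal sketch of what such automated assumption monitoring can look like in code, the example below checks incoming field observations against assumptions taken from a formalized safety argumentation and reports violations for early mitigation and later root cause analysis. The assumption names, bounds, and observation fields are hypothetical placeholders.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class MonitoredAssumption:
        """An assumption from the safety argumentation, expressed as a runtime check.

        Names and bounds are illustrative only.
        """
        name: str
        check: Callable[[dict], bool]

    # Hypothetical assumptions derived from a formalized safety case.
    assumptions = [
        MonitoredAssumption("within_odd_speed", lambda obs: obs["ego_speed_kmh"] <= 130.0),
        MonitoredAssumption("perception_confident", lambda obs: obs["detection_confidence"] >= 0.6),
        MonitoredAssumption("sensor_healthy", lambda obs: obs["lidar_dropout_rate"] < 0.01),
    ]

    def evaluate(observation: dict) -> list[str]:
        """Return the names of all assumptions violated by one observation frame."""
        return [a.name for a in assumptions if not a.check(observation)]

    # One observation frame as it might arrive from in-vehicle logging (placeholder values).
    frame = {"ego_speed_kmh": 97.0, "detection_confidence": 0.45, "lidar_dropout_rate": 0.002}

    violated = evaluate(frame)
    if violated:
        # In a real system this would trigger early mitigation (e.g., a degraded mode)
        # and store the frame for offline root cause analysis.
        print("Assumption violations:", violated)
    ```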

     

    Contact us now

  • Evidence-Based and Continuous Safety Argumentation

    Building on the development-phase measures and runtime assurance, Fraunhofer IKS supports the creation of evidence-based, continuous safety cases that link requirements, V&V results, and runtime data consistently throughout the entire lifecycle. The resulting evidence-based safety arguments for AI-based functions align with ISO PAS 8800, SOTIF, and ISO 26262.

    At the core lies a structured safety case architecture (e.g., GSN) that makes evidence, assumptions, and counterarguments (defeaters) transparent, while quantifying and propagating confidence from evidence to claim. We model risk, complexity, and uncertainties causally (triggering conditions, influencing factors, insufficiencies) and use these models for targeted evidence planning.

    Dynamic safety cases integrate runtime evidence and safety performance indicators for true continuous assurance. Modularity and reusability are ensured via formalized assurance contracts. The result: standards-compliant, decision-ready arguments, supported by prototypes, tools, training, and reviews – proven in automotive, rail, and aviation domains.

    This approach closes the gap between verification & validation (V&V) and safety demonstration – iteratively across the lifecycle – accelerating approvals and establishing a robust foundation for regulatory certification.

     

    Causal risk models and evidence strategy (ISO PAS 8800 / ISO 21448 SOTIF)

    We link causal risk models with a risk-based, traceable evidence strategy for AI functions – compliant with ISO PAS 8800 and ISO 21448 (SOTIF). The result: focused, auditable proofs with clear traceability from system to function level, iteratively over the lifecycle.

    Consulting topics include:

    • Development of causal risk models connecting ML influencing factors and triggering conditions to AI-level insufficiencies and the resulting system-level risk (a minimal sketch follows this list)
    • Model-based prioritization of scenarios, measures, and tests
    • Derivation of concrete data, model, and test evidence over the lifecycle (per ISO PAS 8800 / ISO 21448)
    • Standard-compliant use of quality metrics, real and synthetic data, and integration of explainability and monitoring evidence
    • Evidence management and tool/prototype integration into existing MLOps processes
    • Compliance mapping and audit preparation (ISO PAS 8800, SOTIF, ISO 26262)
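
    To make the idea of a causal risk model more concrete, the sketch below chains hypothetical probabilities from a triggering condition via an ML insufficiency to system-level hazardous behavior and uses the result to rank scenarios for targeted testing. The structure, the independence assumption, and all numbers are illustrative assumptions, not a prescribed method.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CausalChain:
        """One path from a triggering condition via an ML insufficiency to a hazard.

        All probabilities below are illustrative placeholders.
        """
        triggering_condition: str            # e.g., an ODD condition such as heavy rain
        insufficiency: str                   # e.g., missed detection of vulnerable road users
        p_condition: float                   # exposure of the triggering condition in the ODD
        p_insufficiency_given_condition: float
        p_hazard_given_insufficiency: float

        def hazard_contribution(self) -> float:
            """Product along the chain, assuming the factors are conditionally independent."""
            return (self.p_condition
                    * self.p_insufficiency_given_condition
                    * self.p_hazard_given_insufficiency)

    chains = [
        CausalChain("heavy_rain", "pedestrian_missed", 0.05, 0.02, 0.3),
        CausalChain("low_sun_glare", "pedestrian_missed", 0.03, 0.04, 0.3),
        CausalChain("dense_urban_clutter", "false_free_space", 0.20, 0.005, 0.1),
    ]

    # Rank scenarios by their contribution to system-level risk to focus V&V effort.
    for chain in sorted(chains, key=lambda c: c.hazard_contribution(), reverse=True):
        print(f"{chain.triggering_condition} -> {chain.insufficiency}: "
              f"contribution = {chain.hazard_contribution():.2e}")
    ```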

     

    Structured safety argumentation

    We develop structured, modular, and dialectical safety cases and align them with regulatory requirements for efficient approval and faster certification in mobility contexts.

    Consulting topics include:

    • Safety case blueprints (e.g., GSN) for AI components, functions, and systems (a simplified sketch follows this list)
    • Dialectical argumentation with defeaters, assumption and evidence management
    • Modularization through assurance contracts and reuse in product lines
    • End-to-end traceability to requirements, tests, and data artifacts at component and system levels
    • Training, guidelines, and templates for teams
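
    As a rough illustration of how such a structured argument can be represented in machine-readable form, the sketch below models a tiny GSN-style fragment with a goal, a strategy, supporting evidence, an explicit assumption, and a defeater. Node types, identifiers, and statements are hypothetical and heavily simplified compared to a real safety case.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class GsnNode:
        """A minimal GSN-style node; real safety cases distinguish further node types."""
        node_id: str
        kind: str          # "goal", "strategy", "solution", "assumption", or "defeater"
        statement: str
        children: list["GsnNode"] = field(default_factory=list)

    # Tiny, hypothetical argument fragment for a perception component.
    argument = GsnNode("G1", "goal", "Pedestrian detection meets its acceptance criterion in the ODD", [
        GsnNode("A1", "assumption", "Field data distribution matches the validation data distribution"),
        GsnNode("S1", "strategy", "Argue over statistical test evidence and runtime monitoring", [
            GsnNode("Sn1", "solution", "Test report: miss rate below threshold on the validation set"),
            GsnNode("Sn2", "solution", "Runtime monitor: out-of-distribution inputs flagged in operation"),
            GsnNode("D1", "defeater", "Validation set may under-represent rare night-time scenes"),
        ]),
    ])

    def walk(node: GsnNode, depth: int = 0) -> None:
        """Print the argument tree, giving simple traceability from claim to evidence."""
        print("  " * depth + f"[{node.kind}] {node.node_id}: {node.statement}")
        for child in node.children:
            walk(child, depth + 1)

    walk(argument)
    ```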

     

    Confidence assessment of safety arguments

    Our approach quantifies confidence from evidence to claim, transparently propagating uncertainty for decision-ready statements.

    Consulting topics include:

    • Development of confidence frameworks (qualitative/quantitative) for safety cases (a minimal numerical sketch follows this list)
    • Modeling of uncertainty in evidence (measurement, bias, coverage) and claims
    • Determining evidence independence, defeater analysis, and residual risk
    • Modeling of confidence propagation, sensitivity, and stress analyses
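
    A minimal numerical sketch of such confidence propagation is shown below: confidences assigned to individual evidence items are combined under an assumed independence model and discounted for an unresolved defeater, yielding a claim-level confidence plus a simple sensitivity check. The combination rule and all values are illustrative assumptions; in practice, the framework (qualitative or quantitative) is tailored to your safety case.

    ```python
    def combined_confidence(evidence_confidences: list[float], defeater_discount: float = 0.0) -> float:
        """Combine per-evidence confidences into a claim-level confidence.

        Assumes independent evidence items (the claim remains unsupported only if every
        item fails) and discounts for an unresolved defeater. All numbers are illustrative.
        """
        p_no_support = 1.0
        for c in evidence_confidences:
            p_no_support *= (1.0 - c)
        return (1.0 - p_no_support) * (1.0 - defeater_discount)

    # Hypothetical confidences for the evidence items supporting one claim,
    # e.g., a test campaign and field monitoring data.
    evidence = [0.90, 0.70]
    claim_confidence = combined_confidence(evidence, defeater_discount=0.10)
    print(f"claim confidence ~ {claim_confidence:.2f}")

    # Simple sensitivity check: how much does the claim depend on the weaker evidence item?
    without_weaker_item = combined_confidence([0.90], defeater_discount=0.10)
    print(f"without the weaker evidence item ~ {without_weaker_item:.2f}")
    ```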

     

    Continuous assurance: dynamic safety cases with runtime evidence

    We develop dynamic safety cases, integrate runtime evidence, and continuously evaluate confidence – enabling true continuous assurance during field operation.

    Consulting topics include:

    • Design and implementation of Safety Performance Indicators (SPIs) (a minimal sketch follows this list)
    • Pipelines for evidence capture (in-service monitoring, DataOps/MLOps integration)
    • Dashboards and reports for releases, change impacts, and residual risk
    • Governance and triggers for re-argumentation during model/data updates
    • Pilots, tool prototypes, and operational guidelines
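
    As a small illustration of how SPIs can feed continuous assurance, the sketch below aggregates a hypothetical SPI over in-service data, compares it against its target value and a re-argumentation trigger threshold, and reports the resulting status. The SPI definition, the thresholds, and the governance reaction are placeholders that would be derived from your safety case.

    ```python
    from statistics import mean

    # Hypothetical in-service measurements of one SPI over a reporting period,
    # e.g., the per-drive false-negative rate of a perception component.
    spi_samples = [8.0e-5, 9.5e-5, 1.1e-4, 7.2e-5, 1.4e-4]

    SPI_TARGET = 1.0e-4            # value assumed in the safety argumentation
    RETRIGGER_THRESHOLD = 1.2e-4   # sustained exceedance triggers re-argumentation

    def spi_status(samples: list[float]) -> str:
        """Classify the SPI for this reporting period (illustrative logic only)."""
        current = mean(samples)
        if current > RETRIGGER_THRESHOLD:
            return f"RE-ARGUMENTATION: mean SPI {current:.2e} exceeds the trigger threshold"
        if current > SPI_TARGET:
            return f"WATCH: mean SPI {current:.2e} above target, investigate root cause"
        return f"OK: mean SPI {current:.2e} within target"

    print(spi_status(spi_samples))
    ```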

     

    Contact us now

  • Rapid Prototyping for Fast and Safe AI Integration

    Fraunhofer IKS supports you in the selection, integration, optimization, testing, and validation of AI components for your (semi-)autonomous function using our APIKS software platform for developing and validating autonomous driving functions.
     

    You want to test an AI function?

    You want to develop, assure, and test an AI function (e.g., perception, trajectory planning, or an end-to-end architecture) in a standardized environment? Benefit from:

    • Reusable functional blocks: Use existing components to accelerate development.
    • Independent development of target blocks: Work independently on specific target modules.
    • Standards-based system architecture and interfaces: Architecture designed for modularity, compliance, and sustainability, ensuring consistent and reliable communication between functional blocks in accordance with ISO 4804 / ISO 5083.
       

    You want to validate an AI function?

    Already have an AI function and want to validate it in a standardized environment? Benefit from:

    • (Co-)Simulations: Easy integration into advanced simulators like CARLA for comprehensive testing and validation (a minimal sketch follows this list).
    • Preconfigured simulation scenarios: Test and validate new functions using ready-made simulation setups to ensure efficiency and reliability.
    • Compatibility with autonomous driving software platforms: Compatible with leading AD software platforms such as AUTOWARE.
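
    For orientation, the following sketch shows what a minimal validation run against the CARLA simulator can look like using its Python API: connect to a running CARLA server, spawn an ego vehicle, and step the simulation while the function under test would consume the world state. The scenario content, host and port, and the use of the built-in autopilot as a stand-in for the function under test are assumptions for illustration; APIKS-specific interfaces are not shown here.

    ```python
    import carla  # CARLA Python API; requires a running CARLA server

    def run_short_validation(host: str = "localhost", port: int = 2000, steps: int = 100) -> None:
        """Spawn an ego vehicle and step the simulation for a short validation run."""
        client = carla.Client(host, port)
        client.set_timeout(10.0)
        world = client.get_world()

        vehicle_bp = world.get_blueprint_library().filter("vehicle.*")[0]
        spawn_point = world.get_map().get_spawn_points()[0]
        ego = world.spawn_actor(vehicle_bp, spawn_point)

        try:
            ego.set_autopilot(True)  # stand-in for the AI function under test
            for _ in range(steps):
                world.wait_for_tick()
                location = ego.get_transform().location
                # Here the AI function under validation would be fed with sensor data
                # and its outputs compared against expected behavior and metrics.
                print(f"ego position: {location.x:.1f}, {location.y:.1f}")
        finally:
            ego.destroy()

    if __name__ == "__main__":
        run_short_validation()
    ```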

     

    Contact us now

Projects in the Field of Mobility

A selection of current and past projects and success stories in the field of mobility can be found here:

 

Safe AI Engineering

How will autonomous driving be made a reality? The Safe AI Engineering project addresses this question by providing the foundations for generally accepted and practical safety certification for AI in the market.

Fraunhofer IKS is working on the formal underpinning of safety argumentation in the project and, with its APIKS platform, is contributing methods for continuous safety engineering and runtime verification.

 

AutoDevSafeOps: Development and operation of safe automotive systems

The MANNHEIM project AutoDevSafeOps is developing a holistic DevOps approach to meet the high demands that automated and connected vehicles place on existing software architectures. This approach enables over-the-air updates for safety-critical (driving) functions.

 

safe.trAIn: Safe AI for driverless trains

In the safe.trAIn project, 17 partners are working to establish the groundwork for using AI safely in driverless rail vehicles in order to make regional rail transport more efficient and sustainable. Fraunhofer IKS is focusing in particular on the proof of safety for AI functions, the robustness of AI, and the Operational Design Domain (ODD).

 

AI assurance: Safe artificial intelligence for autonomous driving

The “KI-Absicherung” project for AI assurance, an initiative by the German Association of the Automotive Industry (VDA), has defined its goal of making the safety of in-car AI systems verifiable. To this end, the project partners are developing a stringent, verifiable chain of arguments for the assurance of AI functions in highly automated vehicles.

 

 

Continental and Fraunhofer IKS make autonomous vehicles safer

Together with Continental, Fraunhofer IKS was able to create a concept for the dynamic distribution of vehicle functions and develop a technical safety concept that describes an implementation of the identified safety requirements.

 

System Health Monitoring for Autonomous Systems

As part of its collaboration with the worldwide development partnership AUTomotive Open System ARchitecture (AUTOSAR), Fraunhofer IKS, together with other members, conducts research mainly on the practical application of System Health Management.

 

More projects

You can find further projects in our project overview:

Mobility in our Safe Intelligence online magazine

 

Safetronic 2025 / 29.9.2025

Safety remains a challenge

Holistic safety for road vehicles is the focus of the annual Safetronic conference (November 12–13, 2025, in Leinfelden-Echterdingen). In a video discussion, Fraunhofer IKS Director Prof. Dr. Mario Trapp and program committee member Hans-Leo Ross, CARIAD, talk about the future of mobility.

 

Safetronic 2025: Preview / 14.10.2025

What is an acceptable risk? A proposal

The safety of a product, i.e., not causing harm, is a crucial property for its lasting success on the market and for avoiding legal risks for the manufacturer. However, since perfect safety is typically not achievable, the question arises of what constitutes acceptable safety and acceptable risk, i.e., risk acceptance criteria (RAC).

 

Interview with Delphine Kervarec-Vicq / 30.9.2025

“Standards are key to understand each other in an extended eco system”

Delphine Kervarec-Vicq, Product Safety Director at Valeo, is a new member of the program committee for Safetronic, the international conference on holistic safety for road vehicles. In an interview with Safe Intelligence online magazine, she talks about her motivation for joining the committee and explains the importance of safety for automated driving.

 

Safetronic 2025: Preview / 15.9.2025

Without safety, new mobility solutions will fall by the wayside

Not only management consultancies, but also other strategic players see the increasing importance of Remote Driving Systems (RDS) for the transportation of the future. But is the concept safe enough? An extensive two-year operation without safety drivers has confirmed the technical feasibility and safety of RDS. And not only that: a comprehensive training framework for Remote Drivers has demonstrably improved performance and safety.

 

Interview with Reinhard Stolle / 12.9.2025

“Bringing the best technology safely into the vehicle”

Ensuring the safety of AI functions in vehicles remains a challenge that must be mastered step by step. However, visible progress is being made on the road to autonomous driving, says Dr. Reinhard Stolle, deputy director of the Fraunhofer IKS. And the success of ChatGPT & Co. is also likely to be leveraged for highly automated vehicles.

 

Safe Intelligence online magazine

Would you like to know more about the research of Fraunhofer IKS? Then take a look at our Safe Intelligence online magazine. Here you can find out more about our research projects and our employees.

Contact us now

Contact us without obligation using the contact form below. We look forward to receiving your message and will get back to you as soon as possible.
