AI Act and High-Risk Systems

The Artificial Intelligence Act (AI Act) proposed by the European Commission aims to establish a common regulatory and legal framework for artificial intelligence. The AI Act takes a risk-based approach and sets out a series of escalating legal and technical obligations depending on whether an AI product or service is considered low, medium or high risk, while some applications of AI will be banned outright.

High-risk systems, such as AI for autonomous vehicles or medical devices, pose a significant threat to the health, safety or fundamental rights of individuals. They require a mandatory conformity assessment before being placed on the market, carried out as a self-assessment by the provider. In the near future, compliance with the AI Act will be mandatory for any organization providing or deploying an AI-based system. Because the AI Act covers a wide range of use cases, it is more likely than not that a company will fall within its scope.

Closing the gap between expectations and reality

The new AI Act raises these questions:

  • Does a given AI system fulfil all the criteria required to be considered trustworthy?
  • What impact will the AI function have on the overall risk for a given operational domain?

Answering these questions requires considerable expertise in safety engineering, an understanding of robustness metrics and their impact on safety, and the ability to integrate such complex systems into a broader societal context. Meeting the regulations of the AI Act calls for clearly defined and measurable criteria for trustworthy AI. Fraunhofer IKS designs systematic processes, methods and tools for collecting evidence that all necessary requirements and criteria are met, contributing to standardization that bridges the gap between the high-level regulations of the AI Act and detailed, application-specific requirements for AI.
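To make this gap concrete: a high-level requirement such as "sufficient accuracy and robustness" only becomes checkable once it is expressed as a measurable criterion with a defined threshold. The following Python sketch is a minimal illustration of this idea, not a Fraunhofer IKS tool; the criteria names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """A measurable trustworthiness criterion, e.g. derived from an AI Act requirement."""
    name: str
    threshold: float  # minimum acceptable value
    measured: float   # value observed during evaluation

    def satisfied(self) -> bool:
        return self.measured >= self.threshold

# Hypothetical evidence collected for one ML component
evidence = [
    Criterion("test-set accuracy", threshold=0.95, measured=0.97),
    Criterion("robustness under sensor noise", threshold=0.90, measured=0.88),
]

for c in evidence:
    status = "PASS" if c.satisfied() else "FAIL"
    print(f"{c.name}: measured {c.measured:.2f} vs. required {c.threshold:.2f} -> {status}")
```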

Our offer: AI Act workshops and trainings

AI Act workshops

A proactive conformity assessment of AI systems provides guidelines for the design, development and use of an AI system, mitigates the risks of AI failures and can prevent reputational and financial damage by avoiding liability issues. Our AI Act workshops offer systematic support for the compliance of your AI system and cover the following topics:

  • Establish a continuous risk management system by identifying, analyzing and estimating known and foreseeable risks and providing mitigation measures
  • Ensure high-quality training, validation and testing data, minimizing discrimination and bias in data sets
  • Provide methods for design and evaluation of ML models tailored to your application requirements (e.g., accuracy, robustness, prediction certainty, transparency and explainability; see the sketch after this list)
  • Integrate monitoring and logging capabilities and provide automated technical documentation
  • Ensure post-market monitoring and verification of safety and performance properties during the whole lifecycle
  • Support registration in the EU’s future database on high-risk AI systems
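As a simple illustration of the model evaluation mentioned in the list above: accuracy on clean test data and accuracy under input perturbations can be measured side by side. This is a minimal sketch using a scikit-learn classifier on synthetic data; the model, data set and noise level are placeholders, not a prescribed method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data set and model; in practice these come from the application.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Accuracy on clean test data
clean_acc = accuracy_score(y_test, model.predict(X_test))

# A crude robustness probe: accuracy under Gaussian input perturbations
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=0.3, size=X_test.shape)
robust_acc = accuracy_score(y_test, model.predict(X_noisy))

print(f"clean accuracy: {clean_acc:.3f}")
print(f"noisy accuracy: {robust_acc:.3f}")
```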


Training course "Machine Learning for Safety Experts"

Our training course "Machine Learning for Safety Experts" covers the fundamental principles of arguing the safety of automotive functions that make use of machine learning technologies. In particular, the course focuses on the impact of machine learning on the “safety of the intended functionality” as described by the standard ISO 21448. The course includes the following topics and can be adapted to your use case and requirements.

  • Introduction to machine learning on the basis of an example function and publicly available data
  • Safety challenges of machine learning
  • Short introduction to relevant safety standards and their impact on machine learning
  • Safety lifecycle for machine learning functions
  • Derivation of safety requirements for ML functions with particular focus on the definition of safety-related properties such as accuracy, robustness, prediction certainty, transparency and explainability
  • The impact of training and test data on safety
  • Methods for evaluating the performance of the ML function against its safety requirements
  • Safety analysis applied to machine learning
  • Architectural measures to improve the safety of ML functions
  • Design-time and operation-time methods for ensuring the safety of ML functions (illustrated by the sketch after this list)
  • Assurance arguments for machine learning
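As a small illustration of an operation-time method from the list above: a runtime monitor can reject predictions whose certainty falls below a threshold and hand control to a fallback. This is a minimal sketch; the confidence threshold and the example logits are hypothetical, and real systems would typically use calibrated uncertainty estimates rather than raw softmax scores.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def monitored_predict(logits: np.ndarray, min_confidence: float = 0.8):
    """Return the predicted class, or None if the prediction is not
    confident enough and should be handed to a fallback (e.g. a safe state)."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < min_confidence:
        return None  # reject: trigger fallback behaviour
    return best

# Hypothetical logits from an ML function
print(monitored_predict(np.array([2.5, 0.1, -1.0])))  # confident -> class 0
print(monitored_predict(np.array([0.4, 0.3, 0.2])))   # uncertain -> None
```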

Are you interested in a workshop or training? Contact us directly to discuss your needs: bd@iks.fraunhofer.de

Fraunhofer IKS benefits from an international scientific network on safety assurance, including the University of York and the Safety-Critical Systems Club, the Technical University of Munich (TUM) and the Ludwig-Maximilians-Universität München (LMU).

References and more information

Lead on ISO/PAS 8800

The standard defines safety-related properties and risk factors that can lead to insufficient performance and malfunctioning behaviour of Artificial Intelligence (AI) within a road vehicle context.

Medical AI

At Fraunhofer IKS, we conduct research into the development of trustworthy AI-based systems in safety-critical areas such as healthcare. Learn more about our offers.


Whitepaper

The European Artificial Intelligence Act - Overview and Recommendations for Compliance

In this whitepaper, Fraunhofer IKS provides an overview of the key provisions outlined in the EU AI Act. It addresses risk classification, stakeholder considerations, and requirements, with a focus on safety-critical and high-risk AI systems.

Safe Intelligence Blog

Learn more about research topics on our Safe Intelligence Blog.

AI standardization

Under the auspices of Fraunhofer IKS, an international working group is developing norms and standards that deal with artificial intelligence from a safety perspective: the ISO/PAS 8800 standard.


safe.trAIn: Safe AI for driverless trains

In the safe.trAIn project, 17 partners are working to establish the groundwork for the safe use of AI in driverless rail vehicles, making regional rail transport more efficient and sustainable. Fraunhofer IKS focuses in particular on the proof of safety for AI functions, the robustness of AI and the operational design domain (ODD).


AI assurance: Safe artificial intelligence for autonomous driving

The “KI-Absicherung” project for AI assurance, an initiative of the German Association of the Automotive Industry (VDA), has set itself the goal of making the safety of in-car AI systems verifiable. To this end, the project partners are developing a stringent, verifiable chain of arguments for the assurance of AI functions in highly automated vehicles.


Whitepaper

Complexity and uncertainty in the safety assurance and regulation of automated driving

Together with the University of York, Fraunhofer IKS has published a white paper presenting a new approach to assure the safety of cognitive cyber-physical systems (CPS), including automated driving systems.