AI Act and High-Risk Systems

The Artificial Intelligence Act (AI Act), proposed by the European Commission, aims to establish a common regulatory and legal framework for artificial intelligence. The AI Act takes a risk-based approach, setting out a series of escalating legal and technical obligations depending on whether an AI product or service is considered low, medium or high risk, while some applications of AI are banned outright.

High-risk systems, such as AI for autonomous vehicles, medical devices or industrial automation, pose a significant threat to the health, safety or fundamental rights of individuals. They require a mandatory conformity assessment before being placed on the market, carried out as a self-assessment by the provider. In the near future, compliance with the AI Act will be mandatory for any organization providing or deploying an AI-based system. Since the AI Act covers a wide range of use cases, it is more likely than not that a company will fall within its scope.

Closing the gap between expectations and reality

The new AI Act raises these questions:

  • Does a given AI system fulfil all the criteria required to be considered trustworthy?
  • What impact will the AI function have on the overall risk for a given operational domain?

Answering these questions requires considerable expertise in safety engineering, an understanding of robustness metrics and their impact on safety, and the ability to integrate such complex systems into a broader societal context. Meeting the regulatory requirements of the AI Act calls for clearly defined and measurable criteria for trustworthy AI. Fraunhofer IKS designs systematic processes, methods and tools for collecting evidence that all necessary requirements and criteria are met. This work feeds into standardization efforts that bridge the gap between the high-level regulations of the AI Act and detailed, application-specific requirements for AI.

Our offerings: Training courses and workshops

Fraunhofer IKS offers training courses on the EU AI Act and safety assurance. Fraunhofer IKS benefits from an international scientific network on safety assurance, including the University of York and the Safety-Critical Systems Club, the Technical University of Munich (TUM) and the Ludwig-Maximilians-Universität München (LMU).

 

Training

EU AI Act and High-Risk Systems

Get your AI systems ready for the EU AI Act with our EU AI Act training. It offers an overview of the EU AI Act and its implications and dives deep into the compliance approach and the verification methodology.


In-house training

Machine Learning for safety-critical automotive functions

Would you like to use machine learning (ML) without compromising the safety of vehicle functions?

Our training course covers the fundamental principles of arguing the safety of automotive functions that make use of machine learning technologies. In particular, we focus on the practical application of the ISO 21448 and ISO/PAS 8800 standards.

 

In-house training

Bridge the gap: Enabling ML for safety-critical industrial applications

Do you want to use machine learning (ML) in safety-critical industrial applications to automate your production or make it more efficient? Are you unsure how to bridge the gap between machine learning and core safety principles? Then book our in-house training “Enabling ML for safety-critical industrial applications” for your company.

References and more information

Lead on ISO/PAS 8800

The standard defines safety-related properties and risk factors relating to the insufficient performance and malfunctioning behaviour of Artificial Intelligence (AI) within a road vehicle context.

Trustworthy Digital Health

Fraunhofer IKS facilitates prediction and decision support by developing trustworthy AI models using clinical data.

 

Whitepaper

The European Artificial Intelligence Act - Overview and Recommendations for Compliance

Fraunhofer IKS provides an overview of the key provisions of the EU AI Act in this whitepaper. It addresses risk classification, stakeholder considerations, and requirements, with a focus on safety-critical and high-risk AI systems.

Safe Intelligence online magazine

Learn more about our research topics in our Safe Intelligence online magazine.

AI standardization

Under the auspices of Fraunhofer IKS, an international working group is developing norms and standards that deal with artificial intelligence from a safety perspective: the ISO/PAS 8800 standard.

 

safe.trAIn: Safe AI for driverless trains

In the safe.trAIn project, 17 partners are working to establish the groundwork for using AI safely in driverless rail vehicles, to make regional rail transport more efficient and sustainable. Fraunhofer IKS is focusing in particular on the proof of safety for AI functions, the robustness of AI and the operational design domain (ODD).

 

AI assurance: Safe artificial intelligence for autonomous driving

The “KI-Absicherung” project for AI assurance, an initiative by the German Association of the Automotive Industry (VDA), set itself the goal of making the safety of in-car AI systems verifiable. To this end, the project partners are developing a stringent, verifiable chain of arguments for the assurance of AI functions in highly automated vehicles.


Whitepaper

Complexity and uncertainty in the safety assurance and regulation of automated driving

Together with the University of York, Fraunhofer IKS has published a white paper presenting a new approach to assure the safety of cognitive cyber-physical systems (CPS), including automated driving systems.

Contact us now

Contact us without obligation by e-mail at business.development@iks.fraunhofer.de. We look forward to receiving your message and will get back to you as soon as possible.
