Safety Architecture for AI Components in Production

In modern manufacturing, AI functions significantly increase productivity and enable more flexible production processes. Currently, manufacturing robots are mostly statically programmed and rely on precise specifications of their tasks and environments. This makes them very inflexible to variability in the production process: Deviations in material quality and changing production requirements cannot be compensated for, resulting in costly quality fluctuations and downtimes. Innovative manufacturing approaches, such as human-robot collaboration, require adaptive robots that can dynamically interact with inherently unpredictable humans. 

Integrating AI into robot control presents a promising solution. By controlling robots with learned, perception-based behaviors, they can respond reactively to their environment, significantly reducing programming effort. However, the use of AI comes with stringent safety requirements on this so-called control AI to prevent damage to equipment and harm to workers. Currently, AI functions cannot provide the reliability guarantees necessary for safe operation. 

To realize the potential of AI in control systems on behalf of Hitachi, Fraunhofer IKS has developed a safety architecture for safeguarding AI components, in addition to a development process for integrating such components into safety-critical systems. 

Safeguarding Reinforcement Learning for Motion Planning

Abstracted manufacturing scenario involving industrial robots

In the industry project, Fraunhofer IKS investigated an abstracted manufacturing scenario involving industrial robots. A simulated robotic arm was tasked with moving a workpiece (green cube) to a goal position (blue cube) while avoiding collisions with a human worker entering the workspace in different ways. 

The simplicity of this task makes the scenario ideal for analyzing the capabilities and limitations of AI in safety-critical applications. During the project, a trajectory planning function was trained with reinforcement learning, learning the movement policy in a trial-and-error fashion. However, this training method does not allow safety guarantees regarding the function's output. This is due to several AI risk factors that affect AI-based functions at all stages of their development lifecycle. Fraunhofer IKS therefore developed a systematic approach to identify these risks and derive effective mitigation measures within the safety architecture. 
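The trial-and-error training can be illustrated with a minimal Q-learning loop. This is a conceptual sketch only: the one-dimensional toy environment, reward values, and tabular policy are assumptions made for illustration; the project used a simulated robot arm and its own learning setup.

```python
import random

class ChainEnv:
    """Toy 1-D stand-in for the robot workspace: start at state 0, goal at n-1."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, action):
        # action 1 moves toward the goal, action 0 moves away
        self.s = max(0, min(self.n - 1, self.s + (1 if action == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else -0.01), done  # small step penalty

class TabularPolicy:
    """Epsilon-greedy Q-learning policy, learned by trial and error."""
    def __init__(self, n_states, n_actions, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
    def act(self, s):
        if random.random() < self.eps:           # explore
            return random.randrange(len(self.q[s]))
        return self.q[s].index(max(self.q[s]))   # exploit best known action
    def update(self, s, a, reward, s_next):
        # Q-learning update: correct the estimate from the observed outcome
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def train(episodes=200, seed=0):
    random.seed(seed)
    env, pol = ChainEnv(), TabularPolicy(5, 2)
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            a = pol.act(s)
            s_next, r, done = env.step(a)
            pol.update(s, a, r, s_next)
            s, steps = s_next, steps + 1
    return pol
```

Crucially, nothing in this loop bounds the policy's behavior in states it has rarely or never explored, which is why the trained function alone cannot carry safety guarantees.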

AI Safety Envelope Architecture

To identify and assess AI risks, established risk management practices according to ISO 31000 were applied [1]. Starting from a list of generic risk factors, Fraunhofer IKS successively analyzed the impact of each factor on the system under consideration. For the final list of risk factors, Fraunhofer IKS defined state-of-the-art mitigation measures that reduce the level of risk at run-time. The AI Safety Envelope (AI-E) architecture, which is based on prior work of Fraunhofer IKS [2], served as a guide for integrating these measures into the system.
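The successive analysis can be pictured as maintaining a simple risk register. The factor names, scores, and threshold below are hypothetical illustrations; only the overall step of assessing each generic factor and keeping those still relevant for the system reflects the process described above.

```python
# Hypothetical risk register; factor names and scores are illustrative,
# not the project's actual risk analysis.
risk_register = [
    {"factor": "distributional shift in sensor input", "likelihood": 3, "severity": 4,
     "mitigation": "self-monitoring input check"},
    {"factor": "implausible trajectory output",        "likelihood": 2, "severity": 5,
     "mitigation": "self-checking output validation"},
    {"factor": "benign logging latency",               "likelihood": 1, "severity": 1,
     "mitigation": None},
]

def relevant_risks(register, threshold=4):
    """Keep factors whose likelihood * severity score exceeds the threshold."""
    return [r for r in register if r["likelihood"] * r["severity"] > threshold]

for r in relevant_risks(risk_register):
    print(f'{r["factor"]} -> {r["mitigation"]}')
```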

The mitigation techniques were implemented as self-protecting, self-monitoring, or self-checking measures. These three techniques form the AI Safety Envelope, which wraps around the unreliable AI function and transforms it into a dependable subsystem. The AI-E architecture is an implementation of architectural mitigation for AI functions described in ISO/IEC TR 5469 [3]. 
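The wrapping idea can be sketched as a thin layer around the control function. The concrete checks below (an input-domain monitor, an output plausibility bound, a stop fallback) are illustrative assumptions, not the project's actual measures.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyEnvelope:
    """Wraps an unreliable AI function into a dependable subsystem."""
    ai_function: Callable     # the unreliable control AI
    input_ok: Callable        # self-monitoring: is the input within the validated domain?
    output_ok: Callable       # self-checking: is the proposed command plausible?
    safe_fallback: Callable   # self-protecting: conservative action (e.g., stop)

    def __call__(self, observation):
        if not self.input_ok(observation):            # self-monitoring
            return self.safe_fallback(observation)
        command = self.ai_function(observation)
        if not self.output_ok(observation, command):  # self-checking
            return self.safe_fallback(observation)    # self-protecting override
        return command

# Illustrative use: a toy controller whose output must stay within +/-5.0
envelope = SafetyEnvelope(
    ai_function=lambda obs: obs * 10.0,
    input_ok=lambda obs: 0.0 <= obs <= 1.0,
    output_ok=lambda obs, cmd: abs(cmd) <= 5.0,
    safe_fallback=lambda obs: 0.0,   # "stop" command
)
print(envelope(0.3))  # nominal: AI output passes both checks
print(envelope(0.9))  # AI output 9.0 fails plausibility -> fallback 0.0
print(envelope(2.0))  # input outside validated domain -> fallback 0.0
```

The AI function itself is untouched; every path to the actuator passes through the envelope, which is what turns the unreliable component into a dependable subsystem.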

AI Safety Envelope Architecture for Dependable Subsystems (© Fraunhofer IKS)

Simulation Testing of the Safety Concept

Fraunhofer IKS conducted a series of simulation-based tests to quantitatively evaluate how effectively the AI-E architecture can mitigate the identified AI risk factors. Using the simulator Webots, 10,000 test scenarios were evaluated to measure both the rate of safely completed scenarios (safety rate) and the rate of successfully completed scenarios (success rate), to investigate the trade-off between safety and availability. 
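The two rates can be computed directly from per-scenario outcomes. The outcome counts below are invented for illustration and do not reflect the project's measured results; counting a scenario as successful only when it is also safe is likewise an assumption for this sketch.

```python
def evaluate(outcomes):
    """outcomes: list of (completed_safely, reached_goal) booleans per scenario."""
    n = len(outcomes)
    safety_rate = sum(1 for safe, _ in outcomes if safe) / n
    success_rate = sum(1 for safe, goal in outcomes if safe and goal) / n
    return safety_rate, success_rate

# Invented example counts for 10,000 scenarios (not the project's results):
outcomes = ([(True, True)] * 9_400     # safe and goal reached
            + [(True, False)] * 500    # safe, but intervention prevented completion
            + [(False, False)] * 100)  # unsafe outcome
safety, success = evaluate(outcomes)
print(f"safety rate: {safety:.2%}, success rate: {success:.2%}")
# -> safety rate: 99.00%, success rate: 94.00%
```

With this accounting, an envelope intervention can only lower the success rate, never the safety rate, which makes the safety-availability trade-off directly visible in the two numbers.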

The research team observed that each implemented mitigation measure improved the system's safety rate. In addition, the combination of all measures into a single envelope led to further enhancements in safety compared to each individual measure, indicating that the different measures address various AI risks and complement each other. 

Regarding the success rate, Fraunhofer IKS observed a trade-off between safety and availability. In some scenarios, the AI-E was overly cautious and intervened, even though the scenario could have been completed safely by the control AI function on its own. However, such an outcome is generally desirable to prevent hazardous situations and the resulting losses. Statistical analyses indicated that the reduction in availability leads to an acceptable loss in productivity. 

Collaboration Between Fraunhofer IKS and Hitachi

The successes of this project would not have been possible without the close collaboration between research and industry. The industry-driven use case allowed Fraunhofer IKS to apply its expertise to a manufacturing problem with high significance. For Hitachi, the project results provide a foundation for implementing AI-based production systems and applying AI functions in other safety-critical areas. 

Statement from Hitachi:

“Hitachi sincerely appreciates the significant results delivered through this collaboration with Fraunhofer IKS. The development and validation of the AI Safety Envelope architecture, demonstrated by substantial improvements in safety rates across extensive simulations, represent a meaningful advancement toward the safe and reliable integration of AI in autonomous control and other critical systems.

The systematic identification and mitigation of AI-related risks, aligned with international safety standards, provide a robust foundation for applying advanced AI functions not only in manufacturing, but also in a wide range of safety-critical domains. We look forward to leveraging these achievements as a basis for further innovation and practical implementation in the future.” 

Satoshi Otsuka  
Chief Researcher, Hitachi Ltd. R&D Group, Autonomous Control Research Department  

Benefits and Resolved Problems

The primary benefit of the solution lies in the development of a safety architecture that enables the safe integration of AI-based control in industrial robots. The developed architecture enhances the reliability and safety of AI-driven systems, which is crucial for their application in safety-critical environments. 

The central problems addressed include: 

  1. Safety Requirements: The developed safety architecture ensures the safe use of AI functions by systematically identifying and mitigating potential risks. 
  2. Lack of Reliability in AI Functions: The implementation of the AI Safety Envelope increases the reliability of AI functions, enabling their use in safety-critical applications. 
  3. Lack of Flexibility: Enabling the use of AI functions allows robots to dynamically respond to their environment, increasing adaptability to variabilities in the production process.

This success story demonstrates how innovative approaches in AI safety can usher in a new era in manufacturing.

[1] ISO 31000:2018, “Risk management – Guidelines,” International Organization for Standardization, 2018.

[2] G. Weiss, P. Schleiss, D. Schneider, and M. Trapp, “Towards Integrating Undependable Self-Adaptive Systems in Safety-Critical Environments,” in Proceedings of the 13th International Conference on Software Engineering for Adaptive and Self-Managing Systems, May 2018, pp. 26–32.

[3] ISO/IEC TR 5469:2024, “Artificial intelligence – Functional safety and AI systems,” International Organization for Standardization, 2024.

More projects in the field of production

 

Infrastructure sensors for safe, automated forklifts

Together with Hitachi, Fraunhofer IKS has investigated whether infrastructure sensors increase safety in a warehouse with automated forklifts. To do this, the researchers created a simulation framework for the movements of automated guided vehicles in warehouses based on Webots.

 

Safeguarding autonomous mobile robotic systems

Fraunhofer IKS and Magazino GmbH are conducting the research project “RoboDevOps – Continuous development and safeguarding of autonomous, mobile robotic systems” to research new DevOps concepts and evaluate them based on specific scenarios.

 

Simple AI integration for Industry 4.0

In the joint project REMORA, Fraunhofer IKS works on the simple integration of AI services in Industry 4.0. Its goal is to simplify the integration of AI for the real-time analysis of machine data and to develop tools for high-quality, dynamic machine data.

Contact us now

Would you also like to collaborate with Fraunhofer IKS? Contact us without obligation using the contact form below. We look forward to receiving your message and will get back to you as soon as possible.
