Advances in innovation with artificial intelligence
Automated machines must be able to respond quickly and dependably to their environment. Machine learning enhances this capability, but AI applications are not inherently perfect. Errors in selecting suitable training data, or in generating and processing data, can introduce dangerous flaws into the system that the AI technology itself is incapable of recognizing and avoiding. The research activities of the Fraunhofer Institute for Cognitive Systems IKS therefore focus on validating the dependability of AI technologies through extendable and adaptable software architectures.
What’s crucial is that the system in which the AI technology is embedded operates safely and dependably, particularly when safety-critical applications are involved. Validating models learned from data poses decidedly different challenges than validating conventionally programmed software. The training data that is used plays an important role in the quality of the resulting neural network: if the data is not representative of the many situations the system will eventually be confronted with, the result is an inadequate model and bad decisions.

For the model to perform well on data it has not seen, it has to be robust and capable of abstraction. In other words, the model must not be tied too closely to the training data; that leads to overfitting, a model that does not generalize to new data. On the other hand, underfitting also has to be avoided: the creation of a model that is not sophisticated enough to accurately describe the structure of the data.
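The trade-off between overfitting and underfitting can be illustrated with a small sketch. The example below fits polynomials of increasing degree to synthetic noisy data using NumPy; the data-generating function, the sample sizes, and the chosen degrees are all illustrative assumptions, not part of the Fraunhofer IKS work described above. A model that is too simple (degree 1) fails to capture the structure, while one with too many free parameters (degree 15) memorizes the training points and degrades on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a cubic trend plus noise, standing in for
# real-world observations an automated system might learn from.
def make_data(n):
    x = rng.uniform(-3, 3, n)
    y = 0.5 * x**3 - x + rng.normal(0, 2.0, n)
    return x, y

x_train, y_train = make_data(30)   # small training set
x_test, y_test = make_data(200)    # held-out data the model has not seen

def fit_and_score(degree):
    # Fit a polynomial of the given degree on the training set, then
    # measure mean squared error on both training and held-out data.
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

for degree in (1, 3, 15):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:8.2f}, test MSE {test_mse:8.2f}")
```

Typically the degree-1 model shows a high error on both sets (underfitting), while the degree-15 model drives its training error down but widens the gap to the test error (overfitting) — the same failure mode that unrepresentative or insufficient training data produces in safety-critical AI systems, only on a toy scale.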