Medical AI

Our core competencies and services in the field of medical AI

At Fraunhofer IKS, we conduct research into the development of trustworthy AI-based systems for safety-critical areas such as healthcare. This page brings together our core competencies and services in medical AI.

Contact Dr. Narges Ahmidi and Johanna Schmidhuber directly to discuss your specific requirements. We look forward to your inquiry!

  • With time-series analysis, know future events before they happen.

    The ability to predict future events allows clinicians and patients to intervene in a timely manner, potentially improving patient outcomes; even a few hours' difference can be significant in many cases.

    Predicting a patient's future with medical AI is possible, but it demands rigorous engineering. The underlying mathematical and implicit assumptions of AI models, which are not always appropriately considered during the design process, can lead to functional failures, especially in the complex and high-stakes environment of healthcare.

    At Fraunhofer IKS, we offer solutions to develop and validate AI systems: to make sure they are reliable, trustworthy, and can handle real-world data challenges, including missing data, rare cases, distribution shift, uncertainty, and inherent biases. We ensure that the resulting AI systems are built with all the correct bricks and mortar, i.e. appropriate assumptions, algorithms, and validations. Here are some examples:

    • Predicting adverse events or complications before patients’ discharge from hospital
    • Discovering temporal patterns that occur before adverse events
    • Detecting anomalies in a sequence of observations
    • Diagnosing diseases before patients become symptomatic
    • Predicting the results of expensive or rare diagnostic tests from routine clinical observations
    • Diagnosing rare diseases
    • Predicting the recovery trajectory of patients
    • Predicting patients’ response to chosen medications
    • Predicting future resource consumption from historical data
    • Predicting future required resources in clinics
    • Predicting remaining time to system failure for medical devices
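    As a simplified illustration of the time-series idea above, the sketch below computes sliding-window features over a vital-sign sequence and flags windows that drift toward an adverse event. The window size, thresholds, and heart-rate values are hypothetical assumptions chosen for demonstration, not a validated clinical model.

```python
# Illustrative sketch only: a sliding-window early-warning score over a
# vital-sign time series. Window size, features, and thresholds are
# hypothetical assumptions, not a validated clinical model.

def window_features(series, size):
    """Yield (mean, trend) features for each sliding window."""
    for i in range(len(series) - size + 1):
        window = series[i:i + size]
        mean = sum(window) / size
        trend = window[-1] - window[0]  # crude slope across the window
        yield mean, trend

def risk_score(series, size=4, mean_limit=100.0, trend_limit=8.0):
    """Flag windows whose level or upward trend exceeds illustrative limits."""
    return [mean > mean_limit or trend > trend_limit
            for mean, trend in window_features(series, size)]

# Heart-rate-like toy sequence that drifts upward before an "event".
hr = [82, 84, 83, 85, 88, 93, 99, 107, 116]
print(risk_score(hr))  # later windows are flagged as the values drift upward
```

    In a real system, such hand-crafted features would be replaced by learned models, and the thresholds would be derived from validated clinical evidence rather than fixed constants.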
  • Root cause analysis finds answers to all your what-if questions.

    It is imperative for medical AI to provide actionable insights that help healthcare professionals optimize their patients’ journey. Examples of such actions are recommending surgical interventions, suggesting changes in medication, or outlining follow-up steps to improve patients’ future outcomes.

    Predicting the future with AI is empowering, but predictive models should not be used directly to recommend actionable decisions. Traditional predictive AI techniques depend on identifying associations between patient data, such as symptoms and treatments, and the final outcome. However, most of these methods have no explicit built-in mechanisms to discover cause-and-effect relations, which are necessary to identify correct actionable recommendations.

    Tailored to your clinical use cases, our state-of-the-art causal inference methodologies allow for the evaluation of patient outcomes under different treatment decisions. Our methodologies can address your what-if clinical questions using existing retrospective patient data:

    • How will my patients’ outcomes improve if we switch to a new treatment protocol?
    • Which treatment option is most suitable for my patients?
    • Is it better to prescribe medication A or B for certain patients?
    • What sequence of actions should I take to ensure a stable future for our patients?
    • What is the risk of complications if I switch treatment for my patients?
    • How did my past action X impact my patients’ outcomes?
    • Can I discover the action-reaction mechanism between patients’ bodies and a given treatment?
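    To make the what-if idea concrete, the sketch below estimates an average treatment effect with inverse probability weighting (IPW), one standard causal inference technique. All numbers in it (the severity groups, treatment assignments, and outcomes) are hypothetical assumptions for illustration, not real patient data or our specific methodology.

```python
# Illustrative sketch only: estimating an average treatment effect (ATE)
# with inverse probability weighting (IPW) on a tiny hand-made dataset.
# All records are hypothetical; severity confounds treatment assignment.

# Each record: (severity, treated, outcome).
data = [
    (0, 1, 7.0), (0, 0, 5.0), (0, 0, 5.0), (0, 1, 7.0),
    (1, 1, 4.0), (1, 1, 4.0), (1, 0, 1.0), (1, 1, 4.0),
]

def propensity(severity, records):
    """P(treated | severity), estimated by simple counting."""
    group = [t for s, t, _ in records if s == severity]
    return sum(group) / len(group)

def ipw_ate(records):
    """Horvitz-Thompson style IPW estimate of the ATE."""
    total = 0.0
    for s, t, y in records:
        p = propensity(s, records)
        # Exactly one of the two terms is non-zero per record.
        total += y * t / p - y * (1 - t) / (1 - p)
    return total / len(records)

print(round(ipw_ate(data), 6))
```

    With these toy numbers, treatment raises the outcome by 2 points in the low-severity group and 3 points in the high-severity group, so the weighted estimate comes out at 2.5, whereas a naive comparison of raw treated vs. untreated means would be biased by the severity confounder.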
  • Measure generalizability, reliability, robustness, bias, out-of-distribution behavior, and uncertainty.

    AI models utilized in healthcare must be trustworthy! But what precisely are the dimensions of trustworthiness, and how can we quantitatively measure them?

    The reported performance of AI in scientific papers often differs significantly from the performance observed in real-life deployments for a variety of reasons. These include hidden mathematical assumptions within AI models, inadequate testing, inaccurate claims about the capabilities of AI systems, and a failure to generalize effectively to out-of-distribution scenarios.

    Our extensive library of validation tools, including CE/FDA-certified options, is at your disposal to help you thoroughly evaluate the trustworthiness of your AI models. Our tools can assess various aspects, such as generalizability, reliability, robustness, bias, out-of-distribution behavior, and uncertainty. With our assistance, you can quantify the range of functionality that your AI models can reliably claim. We can help you discover answers to questions such as:

    • What are the weaknesses of my AI model?
    • In which scenarios would my AI system fail?
    • How much point testing is enough before concluding continuous coverage of functionality for my AI algorithm?
    • Is my AI algorithm robust and reliable?
    • How can I generate thousands of realistic data scenarios to test the generalizability of my AI algorithm?
    • Is my AI algorithm fair?
    • Should I be concerned about the missing values in my training data?
    • How should I improve the quality of my training data before feeding it into AI?
    • Where are the boundaries of trustworthy functionality, and what is out of distribution (OOD) for my AI?
    • How should I handle my AI's response to OOD scenarios?
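    As a small illustration of one such check, the sketch below flags out-of-distribution inputs by comparing each feature of a new sample against per-feature training statistics. The features (body temperature, heart rate), the z-score limit, and all values are hypothetical assumptions; real OOD detection for medical AI uses considerably more sophisticated, validated methods.

```python
# Illustrative sketch only: flagging out-of-distribution (OOD) inputs via
# per-feature z-scores against training-set statistics. Features, limit,
# and data are hypothetical assumptions, not a certified validation tool.

def fit_stats(rows):
    """Per-feature (mean, standard deviation) of the training data."""
    stats = []
    for col in zip(*rows):
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        stats.append((mean, var ** 0.5))
    return stats

def is_ood(sample, stats, z_limit=3.0):
    """A sample is OOD if any feature lies beyond z_limit std. deviations."""
    return any(abs(x - m) > z_limit * sd for x, (m, sd) in zip(sample, stats))

# Toy training data: (body temperature in °C, heart rate in bpm).
train = [(36.5, 72), (36.8, 75), (37.0, 70), (36.6, 74), (36.9, 71)]
stats = fit_stats(train)
print(is_ood((36.7, 73), stats))   # within the training distribution
print(is_ood((40.2, 140), stats))  # far outside the training range
```

    The practical point is the one raised in the questions above: an AI model should only be trusted inside the region its training data covers, and its behavior on inputs outside that region must be explicitly handled rather than silently extrapolated.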
  • Our services

    We provide various options to accelerate the development of your AI projects:

    • Idea generation workshops: We provide personalized and expedited sessions to create your AI-driven business model, conduct feasibility studies, explore the state-of-the-art AI solutions available for your specific use case, evaluate risks, specify requirements, and prioritize your next development steps.
    • Rapid prototyping: We offer access to a vast collection of pre-developed state-of-the-art AI models, allowing for the accelerated development of your initial prototype.
    • R&D: Our services extend beyond prototyping to include comprehensive solutions. If you're looking to create a full-scale AI system that goes beyond the current state of the art, validate its trustworthiness, obtain certification, and prepare the necessary technical documentation for CE/FDA approval, we can assist you. Our exceptional team of scientists can help you achieve your research and development goals with the utmost impact, quality, and speed of delivery.
    • Trustworthiness validation: If you have already developed your own AI algorithm and seek a second expert opinion to support your claims and assess its trustworthiness, we provide access to our in-house developed suite of tests, including those compliant with CE/FDA standards.

More information about medical AI


Trade fair / April 25-27, 2023

DMEA 2023

The DMEA is one of Europe's most important events for digital health. Once a year, experts from the digital health industry meet for three days in Berlin. In addition to a comprehensive market overview, the DMEA offers all players a variety of opportunities for intensive exchange, targeted networking and effective customer acquisition. And we at Fraunhofer IKS will be there to meet you!


Medical AI

Would you like to learn more about Fraunhofer IKS research on the topic of medical AI? Then take a look at our blog. Here you will find all blog articles on AI in healthcare.