HCML Explainability

Explainability in AI-based decision making

As artificial intelligence (AI) and machine learning algorithms become more prevalent in society, the interpretability and comprehensibility of their results are becoming increasingly important. Following a human-centered machine learning (HCML) approach, we are investigating and developing new methods of explaining these algorithms in order to better support users in their decision-making.

Project description

Many of the learning-based algorithms responsible for much of the progress in AI are difficult to explain, even for machine learning experts. Despite a growing number of publications on explainable AI, there is still a gap between what the research community produces and what society needs. At the same time, different user groups place different requirements on the form and content of the explanations provided.

In this project, we use user studies to investigate different explanation methods with respect to how reliably they describe algorithm behavior and how useful they are for the decision-making of different user groups. Based on these findings, we develop new explanation techniques tailored to users' needs. We focus mainly on the use cases of fraud detection, aviation, and image-based medical diagnostics.

Research contribution

The effectiveness of explanation methods depends strongly on a variety of human factors, and these interrelationships have not yet been sufficiently investigated. We contribute here, for example in the areas of fraud detection and image-based diagnostics, by evaluating visual explanation methods that highlight the input components most important for a decision in controlled experiments with users.
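
One common family of such visual explanation methods is occlusion-based saliency: parts of the input are masked, and the resulting drop in the model's confidence indicates how important each region was for the decision. The following Python sketch is purely illustrative and is not a method developed in this project; the predict function, patch size, and baseline value are assumptions chosen for the example.

import numpy as np

def occlusion_saliency(image, predict, target_class, patch=8, baseline=0.0):
    # Occlusion-based saliency (illustrative sketch): slide a patch of
    # `baseline` pixels across the image and record how much the score
    # of the target class drops at each position. Large drops mark
    # regions the model relies on for its decision.
    base_score = predict(image)[target_class]
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Importance = loss in confidence when this region is hidden.
            heatmap[i // patch, j // patch] = (
                base_score - predict(occluded)[target_class]
            )
    return heatmap

# Usage with a stand-in classifier (hypothetical; any model whose
# predict() returns class scores would work the same way):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def dummy_predict(img):
        # Toy "model": class 1 score grows with brightness of the center.
        center = img[12:20, 12:20].mean()
        return np.array([1.0 - center, center])
    image = rng.random((32, 32))
    print(occlusion_saliency(image, dummy_predict, target_class=1))

Such heatmaps are typically overlaid on the input image; whether they actually help users make better decisions is precisely what the project's controlled experiments evaluate.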

While such experiments are essential for investigating causal relationships, controlled laboratory conditions are often too simplistic to adequately represent reality. For this reason, we additionally investigate more naturalistic decision scenarios, for example in the context of intelligent cockpit assistance systems. Using user-centered methods, we thereby contribute to a more holistic view of AI explainability.

Project duration

01.04.2022 – 31.03.2023

Your contact

Dr. Yuanting Liu

+49 89 3603522 427
liu@fortiss.org
