Explainability in AI-based decision making
Many of the learning-based algorithms responsible for much of AI's recent progress are not easily explained, even by machine learning experts. Despite a growing number of publications on explainable AI, there remains a gap between what the research community produces and what society needs. At the same time, different user groups have different requirements for the form and content of the explanations provided.
In this project, we will investigate different explanation methods through user studies, assessing both how reliably they describe algorithm behavior and how useful they are for decision making across different user groups. Based on these findings, we will develop new explanation techniques tailored to users' needs. We focus mainly on the use cases of fraud detection, aviation, and image-based medical diagnostics.
The effectiveness of explanation methods depends strongly on a variety of human factors, and these interrelationships have not yet been sufficiently investigated. We contribute in this regard, for example in the areas of fraud detection and image-based diagnostics, by evaluating visual explanation methods that highlight the input components important for a decision, in controlled experiments with users.
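As an illustration of the kind of visual explanation method referred to above, the following is a minimal sketch of occlusion sensitivity, one common technique for highlighting decision-relevant input regions. The model, image, and function names here are hypothetical assumptions for illustration, not part of the project's actual tooling:

```python
import numpy as np

def occlusion_saliency(model, image, patch=2, baseline=0.0):
    """Occlusion sensitivity: for each patch of the input, record how much
    the model's score drops when that patch is masked with a baseline value.
    Large drops mark regions the model relies on for its decision."""
    base_score = model(image)
    h, w = image.shape
    saliency = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline  # occlude one patch
            saliency[i:i + patch, j:j + patch] = base_score - model(masked)
    return saliency

# Toy stand-in "model" that only responds to the top-left quadrant.
weights = np.zeros((4, 4))
weights[:2, :2] = 1.0
model = lambda x: float((x * weights).sum())

image = np.ones((4, 4))
saliency = occlusion_saliency(model, image, patch=2)
# The saliency map is high only over the top-left patch, correctly
# highlighting the region this toy model actually uses.
```

In a user study setting, such a map would typically be overlaid on the input image as a heatmap; whether that overlay actually helps a given user group make better decisions is precisely the kind of question the controlled experiments address.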
While such experiments are essential for investigating causal relationships, controlled laboratory conditions are often too simplistic to adequately represent reality. For this reason, we additionally investigate more naturalistic decision scenarios, for example in the context of intelligent cockpit assistance systems. Using user-centered methods, we thereby contribute to a more holistic view of AI explainability.
01.04.2022 – 31.03.2023