Designing AI for Appropriation Will Calibrate Trust

Zelun Tony Zhang, Yuanting Liu and Andreas Butz

CHI TRAIT '23: Workshop on Trust and Reliance in AI-Assisted Tasks at CHI 2023, April 2023

Abstract

Calibrating users' trust in AI to an appropriate level is widely considered one of the key mechanisms for managing brittle AI performance. However, trust calibration is hard to achieve, with numerous interacting factors that can tip trust in one direction or the other. In this position paper, we argue that instead of focusing on trust calibration to achieve resilient human-AI interactions, it might be helpful to first design AI systems for appropriation, i.e., to allow users to use an AI system according to their intentions, beyond what was explicitly considered by the designer. We observe that rather than suggesting end results without human involvement, appropriable AI systems tend to offer users incremental support. Such systems do not eliminate the need for trust calibration, but we argue that they may calibrate users' trust as a side effect and thereby achieve an appropriate level of trust by design.

Subject terms: appropriation, artificial intelligence, iterative problem solving, incremental support, trust calibration

URL: https://www.researchgate.net/publication/369185707_Designing_AI_for_Appropriation_Will_Calibrate_Trust