Authors:
Keito Miyake 1,2; Kumi Ozaki 3; Akihiro Maehigashi 4 and Seiji Yamada 1,2
Affiliations:
1 Informatics Course, The Graduate University for Advanced Studies (SOKENDAI), Kanagawa, Japan
2 National Institute of Informatics, Tokyo, Japan
3 Hamamatsu University School of Medicine, Shizuoka, Japan
4 Shizuoka University, Shizuoka, Japan
Keyword(s):
Reliance Rate, Artificial Intelligence, Radiology, Human-AI Interaction.
Abstract:
Due to significant advancements in artificial intelligence (AI), AI technologies are increasingly providing support in various fields. However, even when AI performs at a high level, humans sometimes refuse to use it for no obvious reason and prefer to solve problems on their own. For instance, experts such as medical professionals tend to be more reluctant to rely on a medical AI's diagnosis than on a human medical professional's. This tendency leads to undertrust in AI and could hinder its implementation in society. Thus, this study aims to mitigate undertrust in AI by providing two functions from the perspective of interaction design: (a) labeling AI outputs as correct or incorrect for training data and (b) modifying AI outputs. To evaluate the effectiveness of these two functions in increasing medical professionals' reliance on AI, we conducted an experiment in which 25 radiologists and radiographers participated in a muscle-tissue-detection task. A two-way analysis of variance was conducted to analyze their AI-usage rate. The results indicate that both functions significantly increased reliance on AI. Our novel finding is that when radiologists are enabled to control AI output by labeling results as correct or incorrect, their reliance on AI increases.