Authors: Yazan Mualla¹; Igor H. Tchappi¹,²; Amro Najjar³; Timotheus Kampik⁴; Stéphane Galland¹ and Christophe Nicolle⁵

Affiliations:
¹ CIAD, Univ. Bourgogne Franche-Comté, UTBM, 90010 Belfort, France
² Faculty of Sciences, University of Ngaoundere, B.P. 454 Ngaoundere, Cameroon
³ AI-Robolab/ICR, Computer Science and Communications, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg
⁴ Department of Computing Science, Umeå University, 90187 Umeå, Sweden
⁵ CIAD, Univ. Bourgogne Franche-Comté, UB, 21000 Dijon, France
Keyword(s):
Explainable Artificial Intelligence, Human-computer Interaction, Agent-based Simulation, Intelligent Aerial Transport Systems.
Abstract:
The communication between robots/agents and humans is challenging, since humans are typically not capable of understanding an agent's state of mind. To overcome this challenge, this paper relies on recent advances in the domain of eXplainable Artificial Intelligence (XAI) to trace the decisions of the agents, improve the human's understanding of the agents' behavior, and hence increase efficiency and user satisfaction. In particular, we propose a Human-Agent EXplainability Architecture (HAEXA) to model human-agent explainability. HAEXA filters the explanations provided by the agents to the human user in order to reduce the user's cognitive load. To evaluate HAEXA, a human-computer interaction experiment is conducted in which participants watch an agent-based simulation of aerial package delivery and fill in a questionnaire built according to XAI metrics established in the literature. The significance of the results is verified using Mann-Whitney U tests. The results show that the explanations increase the understandability of the simulation for human users. However, too many details in the explanations overwhelm users; hence, in many scenarios it is preferable to filter the explanations.
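As an illustration of the filtering idea only (not the authors' implementation), the following minimal Python sketch assumes hypothetical explanation records carrying a detail level, and keeps only the high-level ones shown to the user:

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    agent_id: str      # which delivery agent produced the explanation
    detail_level: int  # 0 = high-level summary; larger = more internal detail
    text: str          # human-readable justification of the agent's decision

def filter_explanations(explanations, max_detail=0):
    """Keep only explanations at or below the chosen detail level,
    mimicking the goal of reducing the user's cognitive load."""
    return [e for e in explanations if e.detail_level <= max_detail]

# Hypothetical usage: show users only high-level justifications.
raw = [
    Explanation("uav-1", 0, "Rerouted to avoid a no-fly zone."),
    Explanation("uav-1", 2, "Replanned path after weather update (cost 14.2 -> 11.7)."),
]
for e in filter_explanations(raw, max_detail=0):
    print(e.agent_id, "->", e.text)
```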
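For the significance analysis, the abstract reports Mann-Whitney U tests; a minimal sketch of such a test on hypothetical questionnaire scores (the example data and group labels are illustrative, not the paper's results), using SciPy:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert scores from two participant groups:
# one shown filtered explanations, one shown unfiltered explanations.
filtered_group = [4, 5, 4, 3, 5, 4, 4]
unfiltered_group = [3, 2, 4, 2, 3, 3, 2]

# Two-sided test of whether the two score distributions differ.
stat, p_value = mannwhitneyu(filtered_group, unfiltered_group,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```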