Authors:
Johannes Schneider 1; Christian Meske 2 and Michalis Vlachos 3
Affiliations:
1 University of Liechtenstein, Vaduz, Liechtenstein; 2 University of Bochum, Bochum, Germany; 3 University of Lausanne, Lausanne, Switzerland
Keyword(s):
Explainability, Artificial Intelligence, Deception, Detection.
Abstract:
Artificial intelligence (AI) comes with great opportunities but can also pose significant risks. Automatically generated explanations for decisions can increase transparency and foster trust, especially for systems based on automated predictions by AI models. However, given, e.g., economic incentives to create dishonest AI, to what extent can we trust explanations? To address this issue, our work investigates how AI models (i.e., deep learning) and existing instruments to increase transparency regarding AI decisions can be used to create and detect deceptive explanations. As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM, a well-established explanation technique for neural networks. We then evaluate the effect of deceptive explanations on users in an experiment with 200 participants. Our findings confirm that deceptive explanations can indeed fool humans. However, one can deploy machine learning (ML) methods to detect seemingly minor deception attempts with accuracy exceeding 80%, given sufficient domain knowledge. Without domain knowledge, one can still infer inconsistencies in the explanations in an unsupervised manner, given basic knowledge of the predictive model under scrutiny.
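For illustration only, a minimal sketch of the kind of unsupervised consistency check alluded to above, not the paper's actual detection method: it removes the tokens an explanation marks as most relevant and checks whether the classifier's confidence actually drops. The attribution format and the `model_predict_proba` interface are assumptions introduced here for the example.

```python
import numpy as np

def consistency_score(tokens, attributions, model_predict_proba, top_k=3):
    """Illustrative sanity check of a token-level explanation.

    tokens: list of words in the input text.
    attributions: per-token relevance scores claimed by the explanation.
    model_predict_proba: callable mapping a text string to the predicted
        probability of the originally predicted class (assumed interface).
    top_k: number of top-attributed tokens to delete.

    Returns the drop in predicted probability after deleting the allegedly
    most relevant tokens. A truthful explanation should produce a clearly
    positive drop; a value near zero suggests the highlighted tokens do not
    actually drive the prediction.
    """
    original_prob = model_predict_proba(" ".join(tokens))

    # Indices of the tokens the explanation claims are most important.
    top_idx = set(np.argsort(attributions)[-top_k:])

    # Delete those tokens and re-query the model.
    reduced_text = " ".join(t for i, t in enumerate(tokens) if i not in top_idx)
    reduced_prob = model_predict_proba(reduced_text)

    return original_prob - reduced_prob
```

Thresholding such scores over many inputs is one simple way to flag explanations that are inconsistent with the model's observed behaviour, requiring only query access to the predictive model.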