6 CONCLUSION
In this paper we reported our investigations into XAI.
We focused on a decision tree for a sample dataset as a
use case to illustrate existing XAI tools/platforms. We
discussed and demonstrated how an explanation can
be constructed at global and local levels, as well as
how such explanations can be further enhanced by using counterfactual datapoints. Finally, we described
our evaluation methodology and provided an analysis
of participant feedback. Although the work described
here is preliminary, we believe it provides some useful
starting points for researchers who are new to the field
of XAI. Our results show that developing a proper and
easily accessible XAI system and interface is a non-
trivial task. A deep understanding of the AI system being used, the application domain, and the intended user groups is essential and may have a significant impact on the quality and acceptance of research outcomes. There
are several possible avenues for future work:
• Explanations may be more understandable to humans if they incorporate natural language generation (NLG) techniques. When implementing XAI techniques for specific cases, NLG may be used to improve the wording of the final explanations (a template-based sketch follows this list).
• We only considered counterfactual explanations in the context of binary classification models. Additional methods may be adopted to support multi-class classification models (see the second sketch after this list).
• We only considered explanations for a single (general) class of stakeholder. However, explanations tailored to other, more specific classes of stakeholder could be produced by incorporating preferences or other background information.
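As a minimal illustration of the NLG direction (a sketch, not an implementation from this paper), the following Python snippet renders a counterfactual explanation as an English sentence using a fixed template; the function name, feature names, and values are all hypothetical.

def explain_in_words(instance, counterfactual, prediction, desired):
    # Hypothetical template-based rendering: list the features whose values
    # would have to change for the model to output the desired class.
    changes = [f"{feat} were {counterfactual[feat]} rather than {val}"
               for feat, val in instance.items()
               if counterfactual.get(feat) != val]
    return (f"The model predicted '{prediction}'. It would have predicted "
            f"'{desired}' if " + " and ".join(changes) + ".")

# Example with made-up values for an income classifier.
print(explain_in_words({"age": 29, "hours_per_week": 25},
                       {"age": 29, "hours_per_week": 45},
                       prediction="<=50K", desired=">50K"))

A real NLG component would add aggregation, referring expressions, and domain vocabulary, but even a simple template like this reads more naturally than a raw feature-difference table.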
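For the multi-class direction, one simple (if crude) extension is to let the counterfactual search target an arbitrary class index rather than "the opposite" class. The sketch below is not the method used in this paper; it assumes a scikit-learn decision tree trained on the Iris data and looks for a single-feature change that flips the prediction to the requested class.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def single_feature_counterfactual(model, x, target_class, grid_size=50):
    # Brute-force sketch: vary one feature at a time over its observed range
    # and return the first variant classified as target_class, if any.
    for j in range(x.shape[0]):
        for v in np.linspace(X[:, j].min(), X[:, j].max(), grid_size):
            x_cf = x.copy()
            x_cf[j] = v
            if model.predict(x_cf.reshape(1, -1))[0] == target_class:
                return x_cf
    return None  # no single-feature counterfactual exists for this instance

x = X[0]  # an instance predicted as class 0 (setosa)
print(single_feature_counterfactual(clf, x, target_class=2))

Dedicated counterfactual libraries offer far better search strategies; the point here is only that supporting more than two classes mainly changes how the target outcome is specified.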
ACKNOWLEDGEMENTS
This work received funding from the EPSRC CHAI
project (EP/T026820/1). The authors thank Marco
Tulio Correia Ribeiro for help with LIME.