
of neural network explanations. Information Fusion,
81:14–40.
Barr, B., Xu, K., Silva, C., Bertini, E., Reilly, R., Bruss,
C. B., and Wittenbach, J. D. (2020). Towards ground
truth explainability on tabular data. arXiv preprint
arXiv:2007.10532.
Coroama, L. and Groza, A. (2022). Evaluation metrics in explainable artificial intelligence (XAI). In International Conference on Advanced Research in Technologies, Information, Innovation and Sustainability, pages 401–413. Springer.
Delaney, E., Greene, D., and Keane, M. T. (2021). Instance-based counterfactual explanations for time series classification. In International Conference on Case-Based Reasoning, pages 32–47. Springer.
Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., Qian, B., Wen, Z., Shah, T., Morgan, G., et al. (2023). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9):1–33.
Guidotti, R. (2021). Evaluating local explanation methods
on ground truth. Artificial Intelligence, 291:103428.
İç, Y. T. and Yurdakul, M. (2021). Development of a new trapezoidal fuzzy AHP-TOPSIS hybrid approach for manufacturing firm performance measurement. Granular Computing, 6(4):915–929.
Jeyakumar, J. V., Noor, J., Cheng, Y.-H., Garcia, L., and Srivastava, M. (2020). How can I explain this to you? An empirical study of deep neural network explanation methods. Advances in Neural Information Processing Systems, 33:4211–4222.
Kommiya Mothilal, R., Mahajan, D., Tan, C., and Sharma, A. (2021). Towards unifying feature attribution and counterfactual explanations: Different means to the same end. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 652–663.
Kumar, I. E., Venkatasubramanian, S., Scheidegger, C., and Friedler, S. (2020). Problems with Shapley-value-based explanations as feature importance measures. In International Conference on Machine Learning, pages 5491–5500. PMLR.
Laugel, T., Renard, X., Lesot, M.-J., Marsala, C., and Detyniecki, M. (2018). Defining locality for surrogates in post-hoc interpretability. In Workshop on Human Interpretability for Machine Learning (WHI), International Conference on Machine Learning (ICML).
Marcílio, W. E. and Eler, D. M. (2020). From explanations to feature selection: assessing SHAP values as feature selection mechanism. In 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pages 340–347. IEEE.
Markus, A. F., Kors, J. A., and Rijnbeek, P. R. (2021). The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. Journal of Biomedical Informatics, 113:103655.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1–38.
Mothilal, R. K., Sharma, A., and Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607–617.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer,
P., Weiss, R., Dubourg, V., Vanderplas, J., Passos,
A., Cournapeau, D., Brucher, M., Perrot, M., and
Duchesnay, E. (2011). Scikit-learn: Machine learning
in Python. Journal of Machine Learning Research,
12:2825–2830.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Schleich, M., Geng, Z., Zhang, Y., and Suciu, D. (2021). GeCo: quality counterfactual explanations in real time. Proceedings of the VLDB Endowment, 14(9):1681–1693.
Sundararajan, M. and Najmi, A. (2020). The many Shapley values for model explanation. In International Conference on Machine Learning, pages 9269–9278. PMLR.
van der Waa, J., Nieuwburg, E., Cremers, A., and Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291:103404.
Vlassopoulos, G., van Erven, T., Brighton, H., and
Menkovski, V. (2020). Explaining predictions by
approximating the local decision boundary. arXiv
preprint arXiv:2006.07985.
Wiratunga, N., Wijekoon, A., Nkisi-Orji, I., Martin, K., Palihawadana, C., and Corsar, D. (2021). Actionable feature discovery in counterfactuals using feature relevance explainers. CEUR Workshop Proceedings.
Yang, M. and Kim, B. (2019). Benchmarking attribution methods with relative feature importance. arXiv preprint arXiv:1907.09701.
Zar, J. H. (2005). Spearman rank correlation. Encyclopedia
of biostatistics, 7.
ICPRAM 2024 - 13th International Conference on Pattern Recognition Applications and Methods