
Kiefer, S. (2022). CaSE: Explaining text classifications by fusion of local surrogate explanation models with contextual and semantic knowledge. Information Fusion, 77:184–195.
Kliegr, T., Bahník, Š., and Fürnkranz, J. (2021). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artificial Intelligence, 295:103458.
Lai, V., Zhang, Y., Chen, C., Liao, Q. V., and Tan, C.
(2023). Selective explanations: Leveraging human in-
put to align explainable AI. Proc. ACM Hum.-Comput.
Interact., 7(CSCW2).
Li, Y., Bandar, Z., and Mclean, D. (2003). An approach for
measuring semantic similarity between words using mul-
tiple information sources. IEEE Transactions on Knowl-
edge and Data Engineering, 15(4):871–882.
Linardatos, P., Papastefanopoulos, V., and Kotsiantis, S. (2021). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1).
Mashayekhi, M. and Gras, R. (2015). Rule extraction from random forest: the RF+HC methods. In Barbosa, D. and Milios, E., editors, Advances in Artificial Intelligence, pages 223–237, Cham. Springer International Publishing.
Miller, T. (2019). Explanation in artificial intelligence: In-
sights from the social sciences. Artificial Intelligence,
267:1–38.
Müller, S., Toborek, V., Beckh, K., Jakobs, M., Bauckhage,
C., and Welke, P. (2023). An empirical evaluation of
the rashomon effect in explainable machine learning. In
Koutra, D., Plant, C., Rodriguez, M. G., Baralis, E., and
Bonchi, F., editors, Proceedings of the European Con-
ference on Machine Learning and Knowledge Discovery
in Databases (ECML-PKDD): Research Track, Part III,
pages 462–478, Turin, Italy. Springer.
Rada, R., Mili, H., Bicknell, E., and Blettner, M. (1989).
Development and application of a metric on semantic
nets. IEEE Transactions on Systems, Man, and Cyber-
netics, 19(1):17–30.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why
should I trust you?": Explaining the predictions of any
classifier. In Krishnapuram, B., Shah, M., Smola, A. J.,
Aggarwal, C. C., Shen, D., and Rastogi, R., editors,
Proceedings of the 22nd ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining
(KDD), pages 1135–1144, San Francisco, CA, USA.
ACM.
Samek, W. and Müller, K.-R. (2019). Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer, Cham. ISBN 978-3-030-28954-6.
Sanders, T. (1997). Semantic and pragmatic sources of co-
herence: On the categorization of coherence relations in
context. Discourse Processes, 24(1):119–147.
Skusa, M. (2006). Semantic coherence in software engi-
neering. In ICEIS Doctoral Consortium, Proceedings
of the 4th ICEIS Doctoral Consortium, DCEIS 2006,
In conjunction with ICEIS 2006, Paphos, Cyprus, May
2006, pages 118–129. ICEIS Press.
Souza, V. F., Cicalese, F., Laber, E., and Molinaro, M.
(2022). Decision trees with short explainable rules. In
Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D.,
Cho, K., and Oh, A., editors, Advances in Neural In-
formation Processing Systems, volume 35, pages 12365–
12379. Curran Associates, Inc.
Suffian, M., Stepin, I., Alonso-Moral, J. M., and Bogliolo,
A. (2023). Investigating human-centered perspectives in
explainable artificial intelligence. In CEUR Workshop
Proceedings, volume 3518, pages 47–66. CEUR-WS.
Sánchez, D., Batet, M., Isern, D., and Valls, A. (2012). Ontology-based semantic similarity: A new feature-based approach. Expert Systems with Applications, 39(9):7718–7728.
Vakulenko, S., de Rijke, M., Cochez, M., Savenkov, V., and
Polleres, A. (2018). Measuring semantic coherence of a
conversation. In The Semantic Web - ISWC 2018 - 17th
International Semantic Web Conference, Monterey, CA,
USA, October 8-12, 2018, Proceedings, Part I, volume
11136 of Lecture Notes in Computer Science, pages 634–
651. Springer.
Wang, X. and Yin, M. (2021). Are explanations helpful? a
comparative study of the effects of explanations in AI-
assisted decision-making. In Proceedings of the 26th
International Conference on Intelligent User Interfaces,
IUI ’21, page 318–328, New York, NY, USA. Associa-
tion for Computing Machinery.
Wu, Z. and Palmer, M. (1994). Verbs semantics and lexi-
cal selection. In Proceedings of the 32nd Annual Meet-
ing on Association for Computational Linguistics, ACL
’94, pages 133–138, USA. Association for Computa-
tional Linguistics.
Xie, Y., Chen, M., Kao, D., Gao, G., and Chen, X. A.
(2020). CheXplain: Enabling physicians to explore
and understand data-driven, AI-enabled medical imaging
analysis. In Proceedings of the 2020 CHI Conference on
Human Factors in Computing Systems, CHI ’20, page
1–13, New York, NY, USA. Association for Computing
Machinery.
Zhou, Y. and Hooker, G. (2016). Interpreting models via single tree approximation. arXiv preprint.
Zhu, G. and Iglesias, C. A. (2017). Computing semantic
similarity of concepts in knowledge graphs. IEEE Trans-
actions on Knowledge and Data Engineering, 29(1):72–
85.
IAI 2025 - Special Session on Interpretable Artificial Intelligence Through Glass-Box Models