
Alonso, J. M. and Bugarín, A. (2019). ExpliClas: Automatic Generation of Explanations in Natural Language for Weka Classifiers. In 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pages 1–6. IEEE.
Baaj, I. (2022). Explainability of possibilistic and fuzzy rule-based systems. PhD thesis, Sorbonne Université.
Berman, A. (2024a). Argumentative Dialogue As Basis For
Human-AI Collaboration. In Proceedings of HHAI
2024 Workshops.
Berman, A. (2024b). Too Far Away from the Job Mar-
ket – Says Who? Linguistically Analyzing Rationales
for AI-based Decisions Concerning Employment Sup-
port. Weizenbaum Journal of the Digital Society, 4(3).
Breitholtz, E. (2020). Enthymemes and Topoi in Dialogue:
the use of common sense reasoning in conversation.
Brill.
van Eemeren, F. H., Garssen, B., and Labrie, N. (2021). Argumentation between Doctors and Patients: Understanding clinical argumentative discourse. John Benjamins Publishing Company.
Forrest, J., Sripada, S., Pang, W., and Coghill, G. (2018).
Towards making NLG a voice for interpretable ma-
chine learning. In Proceedings of The 11th Interna-
tional Natural Language Generation Conference. As-
sociation for Computational Linguistics (ACL).
Fritzell, P., Mesterton, J., and Hägg, O. (2022). Prediction of outcome after spinal surgery—using The Dialogue Support based on the Swedish national quality register. European Spine Journal, pages 1–12.
Grice, H. P. (1975). Logic and conversation. Syntax and
semantics, 3:43–58.
Gulbrandsen, P., Finset, A., and Jensen, B. (2013). Lege-pasient-korpus fra Ahus [Doctor-patient corpus from Ahus].
Kaczmarek-Majer, K., Casalino, G., Castellano, G., Dominiak, M., Hryniewicz, O., Kamińska, O., Vessio, G., and Díaz-Rodríguez, N. (2022). Plenary: Explaining black-box models in natural language through fuzzy linguistic summaries. Information Sciences, 614:374–399.
Lindgren, S. and Aspegren, K. (2004). Kliniska färdigheter: informationsutbytet mellan patient och läkare [Clinical skills: the exchange of information between patient and doctor]. Studentlitteratur AB.
Lundberg, S. M. and Lee, S.-I. (2017). A unified ap-
proach to interpreting model predictions. In Guyon, I.,
Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R.,
Vishwanathan, S., and Garnett, R., editors, Advances
in Neural Information Processing Systems 30, pages
4765–4774. Curran Associates, Inc.
Maraev, V., Breitholtz, E., Howes, C., and Bernardy, J.-
P. (2021). Why should I turn left? Towards active
explainability for spoken dialogue systems. In Pro-
ceedings of the Reasoning and Interaction Conference
(ReInAct 2021), pages 58–64.
Miller, T. (2019). Explanation in artificial intelligence: In-
sights from the social sciences. Artificial Intelligence,
267:1–38.
Miller, T. (2023). Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven Decision Support using Evaluative AI. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’23, pages 333–342, New York, NY, USA. Association for Computing Machinery.
Pantanowitz, L., Pearce, T., Abukhiran, I., Hanna, M.,
Wheeler, S., Soong, T. R., Tafti, A. P., Pantanowitz,
J., Lu, M. Y., Mahmood, F., Gu, Q., and Rashidi,
H. H. (2024). Nongenerative Artificial Intelligence in
Medicine: Advancements and Applications in Super-
vised and Unsupervised Machine Learning. Modern
Pathology, page 100680.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Rudin, C. (2019). Stop explaining black box machine learn-
ing models for high stakes decisions and use inter-
pretable models instead. Nature Machine Intelligence,
1(5):206–215.
Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., and Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys, 16:1–85.
Sbisà, M. (1987). Acts of explanation: A speech act analysis. Argumentation: Perspectives and approaches, pages 7–17.
Slack, D., Krishna, S., Lakkaraju, H., and Singh, S. (2023).
Explaining machine learning models with interactive
natural language conversations using TalkToModel.
Nature Machine Intelligence, 5(8):873–883.
Toulmin, S. E. (2003). The uses of argument. Cambridge University Press.
Wahde, M. and Virgolin, M. (2023). DAISY: An Implemen-
tation of Five Core Principles for Transparent and Ac-
countable Conversational AI. International Journal of
Human–Computer Interaction, 39(9):1856–1873.
Winograd, T. (1971). Procedures as a representation for
data in a computer program for understanding natu-
ral language. PhD thesis, Massachusetts Institute of
Technology.
Xydis, A., Hampson, C., Modgil, S., and Black, E. (2020).
Enthymemes in dialogues. In Computational Models
of Argument, pages 395–402. IOS Press.
IAI 2025 - Special Session on Interpretable Artificial Intelligence Through Glass-Box Models