Moreover, explanations are not limited to speech
– they can include images, videos, body language,
live demonstration, and more. Overall, generating
explanations tailored to particular humans is a
difficult task. However, as with all other aspects of
cognitive modeling, simplified solutions promise to be useful, particularly given the well-established fact that adding more content to an
explanation does not necessarily make it better (cf.
the discussion of decision-making heuristics in
Kahneman, 2011).
ACKNOWLEDGEMENTS
This research was supported in part by Grant
#N00014-23-1-2060 from the U.S. Office of Naval
Research. Any opinions or findings expressed in this
material are those of the authors and do not
necessarily reflect the views of the Office of Naval
Research.
REFERENCES
Babic, B., Gerke, S., Evgeniou, T., Cohen, I. G. (2021).
Beware explanations from AI in health care. Science,
373(6552), 284–286.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., et al.
(2020). Explainable artificial intelligence (XAI):
Concepts, taxonomies, opportunities and challenges
toward responsible AI. Information Fusion, 58, 82–115.
Boden, M. (2006). Mind as Machine: A History of Cognitive Science. Oxford University Press.
Bodria, F., Giannotti, F., Guidotti, R., Naretto, F.,
Pedreschi, D., Rinzivillo, S. (2021). Benchmarking and
survey of explanation methods for black box models.
arXiv:2102.13076v1.
Cambria, E., Malandri, L., Mercorio, F., Mezzanzanica, M., Nobani, N. (2023). A survey on XAI and natural language explanations. Information Processing and Management, 60(1), 103111.
Chan, S., Siegel, E. L. (2019). Will machine learning end
the viability of radiology as a thriving medical
specialty? British Journal of Radiology, 92(1094).
https://doi.org/10.1259/bjr.20180416
Craik, K. J. W. (1943). The Nature of Explanation.
Cambridge University Press.
De Graaf, M. M. A., Dragan, A., Malle, B. F., Ziemke, T. (2021). Introduction to the special issue on explainable robotic systems. ACM Transactions on Human-Robot Interaction, 10(3), Article 22. https://doi.org/10.1145/3461597
Ehsan, U., Wintersberger, P., Liao, Q. V., Watkins, E. A.,
Manger, C., Daumé, H., III, Riener, A., Riedl, M. O.
(2022). Human-centered explainable AI (HCXAI):
Beyond opening the black-box of AI. CHI EA ’22:
Extended abstracts of the 2022 CHI Conference on
Human Factors in Computing Systems, pp. 1–7.
Association for Computing Machinery.
https://doi.org/10.1145/3491101.3503727
Finzel, B., Schwalbe, G. (2023). A comprehensive
taxonomy for explainable artificial intelligence: a
systematic survey of surveys on methods and concepts.
Data Mining and Knowledge Discovery.
https://doi.org/10.1007/s10618-022-00867-8
Gunning, D. (2017). Explainable artificial intelligence
(XAI). DARPA/I2O Program Update, November 2017.
Hempel, C. G. (1965). Aspects of scientific explanation. In
Hempel, C. G. (1965), Aspects of Scientific Explanation
and Other Essays in the Philosophy of Science. Free
Press, pp. 331–396.
Hitzler, P., Sarker, M. K., Eberhart, A. (Eds.). (2023). Compendium of Neurosymbolic Artificial Intelligence. Frontiers in Artificial Intelligence and Applications, vol. 369. IOS Press.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Liao, Q. V., Varshney, K. R. (2022). Human-centered
explainable AI (XAI): From algorithms to user
experiences. arXiv:2110.10790.
Lombrozo, T. (2010). Causal–explanatory pluralism: How
intentions, functions, and mechanisms influence causal
ascriptions. Cognitive Psychology, 61(4): 303–332.
https://doi.org/10.1016/j.cogpsych.2010.05.002
Marcus, G. (2022, March 10). Deep learning is hitting a
wall. Nautilus. https://nautil.us/deep-learning-is-hitting-a-wall-14467
Matzkin, A. (2021, September 29). AI in Healthcare:
Insights from two decades of FDA approvals. Health
Advances blog.
McShane, M., Jarrell, B., Fantry, G., Nirenburg, S., Beale,
S., Johnson, B. (2008). Revealing the conceptual
substrate of biomedical cognitive models to the wider
community. In J. D. Westwood, R. S. Haluck, H. M.
Hoffman, G. T. Mogel, R. Phillips, R. A. Robb, & K.
G. Vosburgh (Eds.), Medicine meets virtual reality 16:
Parallel, combinatorial, convergent: NextMed by
design (pp. 281–286). IOS Press.
McShane, M., Nirenburg, S. (2021). Linguistics for the age
of AI. MIT Press. Available, open access, at
https://direct.mit.edu/books/book/5042/Linguistics-
for-the-Age-of-AI.
McShane, M., Nirenburg, S., English, J. (2024). Agents in
the Long Game of AI: Computational cognitive
modeling for trustworthy, hybrid AI. MIT Press.
Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the Institute of Radio Engineers, 49, 8–30. Reprinted in Feigenbaum, E. A., Feldman, J. (Eds.), Computers and Thought. McGraw-Hill, 1963.
Minsky, M. (2006). The Emotion Machine. Simon and
Schuster.
Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., &
Klein, G. (2019). Explanation in human-AI systems: A
literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI.