for which having accurate information at the right time is essential. However, it is well known that humans suffer from several decision biases (Kahneman & Klein, 2009) that may lead to wrong decisions, particularly in emergency situations, or when decision-makers lack the experience, expertise, or time to reach an analytical decision.
XAI has great potential to support aviation operators by mitigating decision errors. To achieve that, however, it has to provide information in a self-explanatory way, so that operators can comprehend the situation and the consequences of their decisions. We therefore expect our methodology to support describing both how to construct a good explanation and what its necessary content is.
6 CONCLUSIONS
We proposed an ontology summarizing the main elements for XAI in aviation. In this ontology, we acknowledge the dialectical dimension of explanations (Walton, 2004), and we frame it on the basis of speech act theory. We also acknowledge that
discourse theories are relevant for understanding the
rhetorical structure of explanations. In particular, an
explanation is understood as a logical structure with
three terms: (1) an Explanandum, which is an aspect
of the outcomes of the system of interest, (2) the
Explanans, and (3) the discourse relation which links
the Explanans and the Explanandum.
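As a purely illustrative sketch, this three-term structure could be captured by a simple data type such as the one below. The names Explanation, explanandum, explanans, discourse_relation and the example values are hypothetical, introduced here only for illustration; they are not part of the ontology's formal vocabulary.

    from dataclasses import dataclass

    @dataclass
    class Explanation:
        # (1) The aspect of the system outcome to be explained.
        explanandum: str
        # (2) The content that does the explaining.
        explanans: str
        # (3) The discourse relation linking the Explanans to the
        #     Explanandum (illustrative labels, e.g. "Cause",
        #     "Evidence", "Justification").
        discourse_relation: str

    # Illustrative instance (hypothetical values).
    rerouting = Explanation(
        explanandum="The system recommends rerouting the flight.",
        explanans="Severe convective weather is forecast along the filed route.",
        discourse_relation="Justification",
    )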
Our main contribution is to emphasize the role of discourse relations in making explanations successful.
The concepts defined here will serve as foundations
for explainability requirements in the early phases of
systems development. Good quality explainability
requirements should translate cognitive needs or
concerns into implementable and verifiable design
principles. Those requirements will be the point of
contact between practitioners in charge of capturing
the cognitive needs of the operators, and engineers in
charge of designing the system. Our assumption is
that explainability requirements will be
implementable and verifiable if they rest on the
logical structure of information, as managed by the
system whose outcomes or recommendations are to
be explained. On the theoretical side, much work remains to be done. In particular, a mapping still needs to be built between a taxonomy of the cognitive needs to be fulfilled through explainability and the corresponding types of explanation. For this, a further formalization effort might be needed regarding the taxonomy of discourse relations and their semantics.
REFERENCES
Asher, N., Lascarides, A., 2003. Logics of Conversation. Cambridge University Press.
Boella, G., Damiano, R., Hulstijn, J., van der Torre, L., 2007. A common ontology of agent communication languages: Modeling mental attitudes and social commitments using roles. Applied Ontology, 2, 217-265.
Borgo, S., Ferrario, R., Gangemi, A., Guarino, N., Masolo, C., Porello, D., Sanfilippo, E. M., Vieu, L., 2022. DOLCE: A descriptive ontology for linguistic and cognitive engineering. Applied Ontology, 17(1), 45-69.
Chari, S., Seneviratne, O., Gruen, D.M., Foreman, M.A., Das, A.K., McGuinness, D.L., 2020. Explanation Ontology: A Model of Explanations for User-Centered AI. In: Pan, J. Z., et al. (eds), The Semantic Web – ISWC 2020. Lecture Notes in Computer Science, vol 12507. Springer.
Collins, 2024. Collins Dictionary entry on "explanation", consulted online in April 2024.
Druce, J., Niehaus, J., Moody, V., Jensen, D., Littman, M. L., 2021. Brittle AI, causal confusion, and bad mental models: challenges and successes in the XAI program. arXiv preprint arXiv:2106.05506.
EASA, 2023a. Concept Paper, First usable guidance for
Level 1&2 machine learning applications, European
Union Aviation Safety Agency, issue 2, Feb 2023.
EASA, 2023b. Artificial Intelligence Roadmap 2.0: Human-centric approach to AI in aviation, European Union Aviation Safety Agency, May 2023.
Endsley, M. R., 1995. Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.
Endsley, M. R., 2023. Supporting Human-AI Teams:
Transparency, explainability, and situation awareness.
Computers in Human Behavior, 140, 107574.
Ferrario, R., Prévot, L., 2007. Formal ontologies for communicating agents. Applied Ontology, 2, 209-216.
Green, M., 2021. Speech Acts. In: Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy.
Hovy, E.H., Maier, E., 1992. Parsimonious or profligate:
how many and which discourse structure relations?
University of Southern California. Information
Sciences Institute, ISI Research report.
Kahneman, D., Klein, G., 2009. Conditions for intuitive expertise: a failure to disagree. American Psychologist, 64(6), 515.
Keller, R. M., 2016. Ontologies for aviation data management. 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), Sacramento, CA, USA, pp. 1-9.
Lindner, F., 2020. Towards a Formalization of
Explanations for Robots' Actions and Beliefs.
Proceedings of "RobOntics: International Workshop on
Ontologies for Autonomous Robotics", JOWO 2020
Mann, W., Thompson, S., 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8, 243-281.