Authors:
Santiago Marro; Benjamin Molinet; Elena Cabrio and Serena Villata
Affiliation:
Université Côte d’Azur, Inria, CNRS, I3S, France
Keyword(s):
Natural Language Processing, Information Extraction, Argument-based Natural Language Explanations, Healthcare.
Abstract:
The automatic generation of explanations to improve the transparency of machine predictions is a major challenge in Artificial Intelligence. Such explanations can also be effectively applied to other decision-making processes where it is crucial to foster critical thinking in human beings. One example is the clinical cases presented to medical residents together with a set of candidate diagnoses, of which only one is correct. The main goal is not merely to identify the correct answer, but to explain why it is correct and why the others are not. In this paper, we propose a novel approach to generate argument-based natural language explanations for the correct and incorrect answers of standardized medical exams. By combining information extraction methods over heterogeneous medical knowledge bases, our approach automatically extracts from the clinical case the symptoms relevant to the correct diagnosis and uses them to build a natural language explanation. To support this, we annotated a new resource of 314 clinical cases, in which 1843 distinct symptoms are identified. Results on retrieving and matching the relevant symptoms that support the correct diagnosis and contrast the incorrect ones outperform standard baselines.