Authors:
Gemma Bel Bordes¹ and Alan Perotti²
Affiliations:
¹ Utrecht University, Netherlands; ² CENTAI, Turin, Italy
Keyword(s):
Medical Imaging, Computer Vision, Explainable Artificial Intelligence, Fairness.
Abstract:
Advancements in Artificial Intelligence have produced several tools that can be used in medical decision support systems. However, these models often exhibit the so-called 'black-box problem': an algorithmic diagnosis is produced, but no human-understandable details about the decision process can be obtained. This raises critical questions about fairness and explainability, both crucial for equitable healthcare. In this paper we focus on chest X-ray image classification, auditing the reproducibility of previous results in terms of model bias, exploring the applicability of Explainable AI (XAI) techniques, and auditing the fairness of the produced explanations. We highlight the challenges of assessing the quality of explanations provided by XAI methods, particularly in the absence of ground truth. In turn, this strongly hampers the comparison of explanation quality across patient sub-groups, which is a cornerstone of fairness audits. Our experiments illustrate the complexities of achieving transparent AI interpretations in medical diagnostics, underscoring the need for both reliable XAI techniques and more robust fairness auditing methods.