Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem
Syed Ihtesham Hussain Shah, Annette Ten Teije, José Volders
2025
Abstract
Explainable AI (XAI) assists clinicians and researchers in understanding the rationale behind the predictions made by data-driven models, helping them to make informed decisions and trust the models' outputs. Providing accurate explanations for breast cancer treatment predictions in the context of a highly imbalanced, multiclass-multioutput classification problem is extremely challenging. The aim of this study is to perform a comprehensive and detailed analysis of the explanations generated by two post-hoc explanatory methods, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for breast cancer treatment prediction on a highly imbalanced oncological dataset. We introduce evaluation metrics, including consistency, fidelity, and alignment with established clinical guidelines, together with a qualitative analysis, to evaluate the effectiveness and faithfulness of these methods. By examining the strengths and limitations of LIME and SHAP, we aim to determine their suitability for supporting clinical decision making in multifaceted treatments and complex scenarios. Our findings provide important insights into the use of these explanation methods and highlight the importance of transparent and robust predictive models. The experiments show that SHAP performs better than LIME in terms of fidelity and provides more stable explanations that are better aligned with medical guidelines. This work offers guidance to practitioners and model developers in selecting the most suitable explanation technique to promote trust and enhance understanding of predictive healthcare models.
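The abstract refers to generating LIME and SHAP explanations and assessing their consistency. As a minimal, hypothetical Python sketch (not the authors' code, dataset, or evaluation protocol), the snippet below shows how such explanations might be produced for a tabular classifier and how the stability of LIME's top features could be checked across repeated runs; the synthetic data, RandomForestClassifier, and num_features=5 setting are placeholders.

# Hypothetical sketch: LIME/SHAP explanations plus a simple stability check.
# Requires scikit-learn, lime, and shap; data and model are stand-ins for
# the oncological dataset and treatment-prediction model in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from lime.lime_tabular import LimeTabularExplainer
import shap

# Synthetic multiclass data standing in for the real dataset.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME: fit a local surrogate around one instance and list weighted features.
lime_explainer = LimeTabularExplainer(X, mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())

# SHAP: TreeExplainer computes Shapley values for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])

# Consistency sketch: rerun LIME and compare the selected feature sets.
runs = [dict(lime_explainer.explain_instance(X[0], model.predict_proba,
                                             num_features=5).as_list())
        for _ in range(3)]
print("Stable feature set across runs:",
      set(runs[0]) == set(runs[1]) == set(runs[2]))

Because LIME relies on random perturbation sampling, repeated runs can select different features, which is one way the paper's notion of explanation consistency could be probed; SHAP's TreeExplainer is deterministic for a fixed model and input.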
Paper Citation
in Harvard Style
Shah S., Ten Teije A. and Volders J. (2025). Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem. In Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF; ISBN 978-989-758-731-3, SciTePress, pages 530-539. DOI: 10.5220/0013157400003911
in Bibtex Style
@conference{healthinf25,
author={Syed Ihtesham Hussain Shah and Annette Ten Teije and José Volders},
title={Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem},
booktitle={Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF},
year={2025},
pages={530-539},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013157400003911},
isbn={978-989-758-731-3},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF
TI - Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem
SN - 978-989-758-731-3
AU - Shah S.
AU - Ten Teije A.
AU - Volders J.
PY - 2025
SP - 530
EP - 539
DO - 10.5220/0013157400003911
PB - SciTePress