
underscores a robust interpretation of the model's workings, further validating that the identified features reflect underlying patterns rather than artifacts of specific methods. Additionally, the consistency of the common features' impact, whether positive or negative, reinforces the reliability of these features in influencing the model's outcomes. This convergence of results is significant for practitioners: both interpretability methods provide a similar understanding of the model, enabling more precise insights for decision-making.
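The agreement described above can be illustrated with a small sketch. Lacking the paper's dataset and its LIME/SHAP pipeline, this hypothetical example uses scikit-learn's impurity-based and permutation importances as stand-ins for two attribution methods, and measures their agreement with a Spearman rank correlation (the dataset and model here are illustrative only):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from scipy.stats import spearmanr

# Toy dataset standing in for the malnutrition data used in the paper.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribution method 1: impurity-based feature importance.
imp_a = model.feature_importances_

# Attribution method 2: permutation importance.
imp_b = permutation_importance(model, X, y, n_repeats=10,
                               random_state=0).importances_mean

# Rank agreement between the two attribution methods; a high positive
# correlation indicates the methods converge on the same feature ranking.
rho, _ = spearmanr(imp_a, imp_b)
print(f"Spearman rank correlation: {rho:.2f}")
```

The same rank-correlation check applies directly to LIME weights and SHAP values aggregated over a test set.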
ACKNOWLEDGMENTS
The author acknowledges the contributions of Ms. Irene Wanyana, Dr. Isunju JohnBosco, and Dr. Kiberu Vincent, who supervised part of this research while the author was completing a Master's program at Makerere University, and of Vladimir Estivill-Castro as current advisor during the author's PhD program.
REFERENCES
Aldughayfiq, B., Ashfaq, F., Jhanjhi, N., and Humayun, M. (2023). Explainable AI for retinoblastoma diagnosis: interpreting deep learning models with LIME and SHAP. Diagnostics, 13(11):1932.

Bitew, F. H., Sparks, C. S., and Nyarko, S. H. (2022). Machine learning algorithms for predicting undernutrition among under-five children in Ethiopia. Public Health Nutrition, 25(2):269–280.

Casalicchio, G., Molnar, C., and Bischl, B. (2019). Visualizing the feature importance for black box models. In Machine Learning and Knowledge Discovery in Databases: European Conf., ECML PKDD 2018, Proc., Part I, pages 655–670. Springer.

Di Martino, F., Delmastro, F., and Dolciotti, C. (2023). Explainable AI for malnutrition risk prediction from m-health and clinical data. Smart Health, 30:100429.

ElShawi, R., Sherif, Y., Al-Mallah, M., and Sakr, S. (2021). Interpretability in healthcare: A comparative study of local machine learning interpretability techniques. Computational Intelligence, 37(4):1633–1650.

Gadekallu, T. R., Iwendi, C., Wei, C., and Xin, Q. (2021). Identification of malnutrition and prediction of BMI from facial images using real-time image processing and machine learning. IET Image Processing, 16:647–658.

Islam, M. M., Rahman, M. J., Islam, M. M., Roy, D. C., Ahmed, N. F., Hussain, S., Amanullah, M., Abedin, M. M., and Maniruzzaman, M. (2022). Application of machine learning based algorithm for prediction of malnutrition among women in Bangladesh. Int. J. of Cognitive Computing in Engineering, 3:46–57.

Kikafunda, J. K., Walker, A. F., Collett, D., and Tumwine, J. K. (1998). Risk factors for early childhood malnutrition in Uganda. Pediatrics, 102(4):e45–e45.

Kumar, A., Chirag, Y., Kodipalli, A., and Rao, T. (2024). Anemia detection and severity prediction using classification algorithms with optimized hyperparameters, boosting techniques and XAI. In 2024 5th Int. Conf. for Emerging Technology (INCET), pages 1–5. IEEE.

Lee, E., Braines, D., Stiffler, M., Hudler, A., and Harborne, D. (2019). Developing the sensitivity of LIME for better machine learning explanation. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, vol. 11006, pages 349–356. SPIE.

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Proc. 31st Int. Conf. on Neural Information Processing Systems, NIPS, pages 4768–4777, Red Hook, NY, USA. Curran Associates.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825–2830.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, KDD, pages 1135–1144, NY, USA. ACM.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Proc. 32nd AAAI Conf. on Artificial Intelligence. AAAI Press.

Rodríguez-Pérez, R. and Bajorath, J. (2019). Interpretation of compound activity predictions from complex machine learning models using local approximations and Shapley values. Journal of Medicinal Chemistry, 63(16):8761–8777.

Sermet-Gaudelus, I., Poisson-Salomon, A.-S., Colomb, V., Brusset, M.-C., Mosser, F., Berrier, F., and Ricour, C. (2000). Simple pediatric nutritional risk score to identify children at risk of malnutrition. The American Journal of Clinical Nutrition, 72(1):64–70.

Talukder, A. and Ahammed, B. (2020). Machine learning algorithms for predicting malnutrition among under-five children in Bangladesh. Nutrition, 78:110861.

WHO (2007). World Health Organisation application tools. https://www.who.int/tools/growth-reference-data-for-5to19-years/application-tools. Accessed: 2023-11-06.

Zhang, Y., Song, K., Sun, Y., Tan, S., and Udell, M. (2019). "Why should you trust my explanation?" Understanding uncertainty in LIME explanations. arXiv preprint arXiv:1904.12991.
ICPRAM 2025 - 14th International Conference on Pattern Recognition Applications and Methods