5 IMPLICATIONS AND CONCLUSIONS
This study was undertaken as an SMS to investigate the interpretability of black-box models in the medical field. An automatic search was performed in six digital libraries, and a total of 179 papers published between 1994 and 2020 qualified for inclusion.
Several publication venues that could be useful to researchers planning future work were also identified.
Most of the qualified papers proposed a solution along with its evaluation (usually HBE), which reflects both the strong interest in demystifying interpretability and the maturity of the community. Nevertheless, researchers are encouraged to validate their proposals or evaluations in real-world settings (e.g., clinics, hospitals) by implementing their proposed solutions in decision support systems. As for ML techniques, ANNs were the black-box technique most frequently investigated for interpretability. More effort should be devoted to interpreting SVM/SVR models and tree ensembles, since they are as widely used as ANNs.
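As an illustration of the kind of effort this recommendation calls for, the following minimal sketch (not drawn from the mapped studies) applies a model-agnostic technique, permutation feature importance, to an RBF-kernel SVM. The public breast cancer dataset and the scikit-learn calls are illustrative assumptions, not the setup of any reviewed paper.

```python
# Minimal sketch: model-agnostic interpretation of an SVM classifier
# via permutation feature importance (illustrative, not from the SMS).
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An RBF-kernel SVM is a black box: it exposes no native feature attributions.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)

# Permutation importance: the drop in test accuracy when a single feature
# is randomly shuffled, averaged over n_repeats shuffles.
result = permutation_importance(model, X_test, y_test, n_repeats=30,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"± {result.importances_std[idx]:.3f}")
```

Because the technique only queries the fitted model's predictions, the same code would interpret a tree ensemble unchanged.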
For ML to be used effectively in domains such as medicine, the community as a whole must break down the interpretability barrier and thereby remove the bottleneck created by the lack of ML transparency.
Figure 7: Distribution of ANN types.
ACKNOWLEDGEMENTS
The authors would like to thank the Moroccan
Ministry of Higher Education and Scientific
Research, and CNRST.