REFERENCES
Alipoor, G., Mirbagheri, S., Moosavi, S., and Cruz, S.
(2022). Incipient detection of stator inter-turn short-
circuit faults in a doubly-fed induction generator using
deep learning. IET Electric Power Applications.
Bastani, O., Kim, C., and Bastani, H. (2017). Interpret-
ing blackbox models via model extraction. ArXiv,
abs/1705.08504.
Bishop, C. (1995). Neural Networks for Pattern Recogni-
tion. Oxford University Press.
Chamlal, H., Ouaderhman, T., and Rebbah, F. (2022). A hy-
brid feature selection approach for microarray datasets
using graph theoretic-based method. Information Sci-
ences, 615:449–474.
Cover, T. and Thomas, J. (2006). Elements of information
theory. John Wiley & Sons, second edition.
Dhal, P. and Azad, C. (2022). A comprehensive survey on
feature selection in the various fields of machine learn-
ing. Applied Intelligence, 52(4):4543–4581.
Gan, J. Q., Awwad Shiekh Hasan, B., and Tsui, C. S. L.
(2014). A filter-dominating hybrid sequential forward
floating search method for feature subset selection in
high-dimensional space. International Journal of Ma-
chine Learning and Cybernetics, 5(3):413–423.
Guyon, I. and Elisseeff, A. (2003). An introduction to vari-
able and feature selection. Journal of Machine Learn-
ing Research (JMLR), 3:1157–1182.
Guyon, I., Gunn, S., Nikravesh, M., and Zadeh (Editors), L.
(2006). Feature extraction, foundations and applica-
tions. Springer.
Hanif, A., Zhang, X., and Wood, S. (2021). A survey
on explainable artificial intelligence techniques and
challenges. In IEEE 25th International Enterprise
Distributed Object Computing Workshop (EDOCW),
pages 81–89.
Huynh-Cam, T.-T., Nalluri, V., Chen, L.-S., and Yang, Y.-
Y. (2022). IS-DT: A new feature selection method for
determining the important features in programmatic
buying. Big Data and Cognitive Computing, 6(4).
Jeon, Y. and Hwang, G. (2023). Feature selection with
scalable variational gaussian process via sensitivity
analysis based on L2 divergence. Neurocomputing,
518:577–592.
Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J.,
Viégas, F., and Sayres, R. (2018). Interpretability
beyond feature attribution: Quantitative testing with
concept activation vectors (TCAV). In Dy, J. G. and
Krause, A., editors, ICML, volume 80 of Proceedings
of Machine Learning Research, pages 2673–2682.
PMLR.
Lakkaraju, H. and Bastani, O. (2020). How do I fool you?
Manipulating user trust via misleading black box ex-
planations. Proceedings of the AAAI/ACM Conference
on AI, Ethics, and Society, pages 79–85.
Lou, Y., Caruana, R., Gehrke, J., and Hooker, G. (2013).
Accurate intelligible models with pairwise interac-
tions. In Dhillon, I. S., Koren, Y., Ghani, R., Sen-
ator, T. E., Bradley, P., Parekh, R., He, J., Gross-
man, R. L., and Uthurusamy, R., editors, The 19th
ACM SIGKDD International Conference on Knowl-
edge Discovery and Data Mining, KDD, Chicago, IL,
USA, pages 623–631. ACM.
Moorthy, K. and Mohamad, M. (2011). Random forest
for gene selection and microarray data classification.
Bioinformation, 7(3):142–146.
Mothilal, R., Sharma, A., and Tan, C. (2020). Explain-
ing machine learning classifiers through diverse coun-
terfactual explanations. In Proceedings of the 2020
Conference on Fairness, Accountability, and Trans-
parency, pages 607–617. ACM.
Pudjihartono, N., Fadason, T., Kempa-Liehr, A., and
O’Sullivan, J. (2022). A review of feature selection
methods for machine learning-based disease risk pre-
diction. Frontiers in Bioinformatics, 2:927312.
Qi, C., Diao, J., and Qiu, L. (2019). On estimating model in
feature selection with cross-validation. IEEE Access,
7:33454–33463.
Remeseiro, B. and Bolon-Canedo, V. (2019). A review
of feature selection methods in medical applications.
Computers in Biology and Medicine, 112:103375.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). Why
should I trust you? Explaining the predictions of any
classifier. In HLT-NAACL Demos, pages 97–101. The
Association for Computational Linguistics.
Rostami, M., Forouzandeh, S., Berahmand, K., Soltani, M.,
Shahsavari, M., and Oussalah, M. (2022). Gene se-
lection for microarray data classification via multi-
objective graph theoretic-based method. Artificial In-
telligence in Medicine, 123:102228.
Scheda, R. and Diciotti, S. (2022). Explanations of machine
learning models in repeated nested cross-validation:
An application in age prediction using brain complex-
ity features. Applied Sciences, 12(13).
Szepannek, G. and Lübke, K. (2022). Explaining artificial
intelligence with care. KI - Künstliche Intelligenz.
Tjoa, E. and Guan, C. (2021). A survey on explainable ar-
tificial intelligence (XAI): Toward medical XAI. IEEE
Transactions on Neural Networks and Learning Systems,
32(11):4793–4813.
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., and Zhu,
J. (2019). Explainable ai: A brief survey on history,
research areas, approaches and challenges. In Natural
Language Processing and Chinese Computing.
Xu, Y., Liu, Y., and Ma, J. (2022). Detection and de-
fense against DDoS attack on SDN controller based
on feature selection. In Chen, X., Huang, X., and
Kutyłowski, M., editors, Security and Privacy in So-
cial Networks and Big Data, pages 247–263, Singa-
pore. Springer Nature Singapore.
Yu, L. and Liu, H. (2003). Feature selection for high-
dimensional data: a fast correlation-based filter solu-
tion. In Proceedings of the International Conference
on Machine Learning (ICML), pages 856–863.
Yu, L. and Liu, H. (2004). Efficient feature selection via
analysis of relevance and redundancy. Journal of Ma-
chine Learning Research (JMLR), 5:1205–1224.