
of the tested explainability models, and also that the perturbation techniques interfere with the XAI methods in different ways; the Grad-CAM method, for example, behaved differently under each technique tested.
The black color perturbation technique required more iterations, and consequently generated more images, in three of the four XAI methods presented. It can therefore be concluded that, among the techniques tested in this work, this technique has the least capacity to impact the segmentation model. Also based on the average number of iterations and the average number of images generated, the Saliency Map method was the least sensitive to the different perturbation methods and, therefore, the best among the XAI methods tested on the problem in question. The CNN Filters method was the least affected by the choice of disturbance type, presenting the smallest variation in the average number of images, while the Grad-CAM method was the most sensitive of the four.
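To make the comparison above concrete, the minimal sketch below illustrates the kind of perturbation loop being summarized: the pixels ranked most relevant by an XAI attribution map are blacked out in batches until the segmentation output changes, and the number of perturbed images produced along the way is counted. The `model.predict` and `explainer.attribute` interfaces, the batch size, and the stopping criterion are assumptions made for illustration, not the exact procedure used in the experiments.

```python
import numpy as np

def count_perturbation_iterations(model, explainer, image,
                                  pixels_per_step=100, max_iters=1000):
    # Illustrative sketch: black out the pixels ranked most relevant by the
    # XAI attribution map, in batches, until the predicted segmentation changes.
    # `model.predict` and `explainer.attribute` are hypothetical interfaces.
    perturbed = image.copy()
    baseline = model.predict(perturbed[None])[0]       # reference segmentation mask
    for iteration in range(1, max_iters + 1):
        relevance = explainer.attribute(perturbed)     # H x W relevance map (assumed)
        visible = perturbed.sum(axis=-1) > 0           # skip pixels already blacked out
        scores = np.where(visible, relevance, -np.inf)
        top = np.argsort(scores.ravel())[-pixels_per_step:]
        rows, cols = np.unravel_index(top, scores.shape)
        perturbed[rows, cols] = 0                      # black-colour perturbation
        if not np.array_equal(model.predict(perturbed[None])[0], baseline):
            return iteration                           # images generated before the change
    return max_iters
```

Under this reading, an XAI method that needs more iterations (and hence more generated images) before the model's output changes is less sensitive to the perturbation, which is how the averages discussed above are interpreted.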
For future work, it is suggested that the experiments presented here be replicated with other AI models and other explainability methods, as well as in scenarios beyond image segmentation. It would also be interesting to test other existing perturbation techniques, and their combinations with explainability methods, to identify their influence on the predictive capacity of the models. Finally, more evidence about this influence can be gathered and, from it, the combination of XAI method and pixel perturbation best suited to a given problem can be quantified.