
ment classification in the agriculture and forestry domains. The lack of consistency among attribution maps and the disparity between the maps and expert annotations underscore the need for further research and development in this field. Future work should focus on developing improved attribution methods that address the limitations identified in this study. Additionally, metrics are needed that can objectively evaluate attribution maps, since current metrics such as "Insertion" and "Deletion" can yield conflicting results. By addressing these challenges, we can enhance the interpretability and trustworthiness of neural networks in critical applications within agriculture and forestry.
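To make this conflict concrete, the following minimal Python sketch outlines how "Insertion" and "Deletion" scores in the spirit of Petsiuk et al. (2018) are typically computed. The function name insertion_deletion_auc, the model callable, the zero-pixel baseline, and the step count are illustrative assumptions, not the exact protocol used in our experiments.

import numpy as np

def insertion_deletion_auc(model, image, saliency, mode="deletion", steps=50):
    # model:    callable returning the target-class probability for one image
    # image:    float array of shape (H, W, C)
    # saliency: float array of shape (H, W); higher values = more important
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    # "Insertion" starts from a blank canvas and reveals salient pixels;
    # "Deletion" starts from the full image and erases them.
    canvas = np.zeros_like(image) if mode == "insertion" else image.copy()
    scores = [model(canvas)]
    per_step = max(1, (h * w) // steps)
    for start in range(0, h * w, per_step):
        idx = order[start:start + per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        if mode == "insertion":
            canvas[ys, xs] = image[ys, xs]  # reveal the next-most-salient pixels
        else:
            canvas[ys, xs] = 0.0            # erase the next-most-salient pixels
        scores.append(model(canvas))
    scores = np.asarray(scores, dtype=float)
    # Trapezoidal area under the probability-vs-pixel-fraction curve.
    return float((scores.sum() - 0.5 * (scores[0] + scores[-1])) / (len(scores) - 1))

Because "Insertion" rewards maps whose top-ranked pixels alone already support a confident prediction, while "Deletion" rewards maps whose top-ranked pixels are indispensable, the same attribution method can rank well on one curve and poorly on the other.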
REFERENCES
Ancona, M., Ceolini, E., Öztireli, A. C., and Gross, M. H. (2017). A unified view of gradient-based attribution methods for deep neural networks. CoRR, abs/1711.06104.
Chattopadhay, A., Sarkar, A., Howlader, P., and Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE.
Desai, S. and Ramaswamy, H. G. (2020). Ablation-CAM: Visual explanations for deep convolutional network via gradient-free localization. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 972–980.
Englebert, A., Cornu, O., and De Vleeschouwer, C. (2022). Poly-CAM: High resolution class activation map for convolutional neural networks.
Fang, A., Kornblith, S., and Schmidt, L. (2023). Does progress on ImageNet transfer to real-world datasets?
Fong, R., Patrick, M., and Vedaldi, A. (2019). Understanding deep networks via extremal perturbations and smooth masks. CoRR, abs/1910.08485.
Fong, R. and Vedaldi, A. (2017). Interpretable explanations of black boxes by meaningful perturbation. CoRR, abs/1704.03296.
Fu, R., Hu, Q., Dong, X., Guo, Y., Gao, Y., and Li, B. (2020). Axiom-based Grad-CAM: Towards accurate visualization and explanation of CNNs. CoRR, abs/2008.02312.
Gomez, T., Fréour, T., and Mouchère, H. (2022). Metrics for saliency map evaluation of deep learning explanation methods. CoRR, abs/2201.13291.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition.
Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K. Q. (2018). Densely connected convolutional networks.
Jiang, P.-T., Zhang, C.-B., Hou, Q., Cheng, M.-M., and Wei, Y. (2021). LayerCAM: Exploring hierarchical class activation maps for localization. IEEE Transactions on Image Processing, 30:5875–5888.
Kapishnikov, A., Venugopalan, S., Avci, B., Wedin, B., Terry, M., and Bolukbasi, T. (2021). Guided integrated gradients: An adaptive path method for removing noise. CoRR, abs/2106.09788.
Koch, G. and Koch, S. (2022). Holzartenwissen im App-Format: Neue App "macroHOLZdata" zur Holzartenbestimmung und -beschreibung [Wood species knowledge in app format: new app "macroHOLZdata" for wood species identification and description]. Furnier-Magazin, 26:52–56.
Li, H., Li, Z., Ma, R., and Wu, T. (2022). FD-CAM: Improving faithfulness and discriminability of visual explanation for CNNs.
Liu, Z., Mao, H., Wu, C., Feichtenhofer, C., Darrell, T., and Xie, S. (2022). A ConvNet for the 2020s. CoRR, abs/2201.03545.
Muhammad, M. B. and Yeasin, M. (2020). Eigen-CAM: Class activation map using principal components. CoRR, abs/2008.00299.
Naidu, R., Ghosh, A., Maurya, Y., K, S. R. N., and Kundu, S. S. (2020). IS-CAM: Integrated Score-CAM for axiomatic-based explanations. CoRR, abs/2010.03023.
Nieradzik, L., Sieburg-Rockel, J., Helmling, S., Keuper, J., Weibel, T., Olbrich, A., and Stephani, H. (2023). Automating wood species detection and classification in microscopic images of fibrous materials with deep learning.
Omeiza, D., Speakman, S., Cintas, C., and Weldemariam, K. (2019). Smooth Grad-CAM++: An enhanced inference level visualization technique for deep convolutional neural network models. CoRR, abs/1908.01224.
Petsiuk, V., Das, A., and Saenko, K. (2018). RISE: Randomized input sampling for explanation of black-box models. CoRR, abs/1806.07421.
Poppi, S., Cornia, M., Baraldi, L., and Cucchiara, R. (2021). Revisiting the evaluation of class activation mapping for explainability: A novel metric and experimental analysis. CoRR, abs/2104.10252.
Raatikainen, L. and Rahtu, E. (2022). The weighting game: Evaluating quality of explainability methods.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. CoRR, abs/1602.04938.
Richter, H. G. and Dallwitz, M. J. (2000 onwards). Commercial timbers: Descriptions, illustrations, identification, and information retrieval. (accessed on 15 May 2023).
Saporta, A., Gui, X., Agrawal, A., Pareek, A., Truong, S. Q. H., Nguyen, C. D. T., Ngo, V.-D., Seekins, J., Blankenberg, F. G., Ng, A. Y., Lungren, M. P., and Rajpurkar, P. (2022). Benchmarking saliency methods for chest x-ray interpretation. Nature Machine Intelligence, 4(10):867–878.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2019). Grad-CAM: Visual explanations from deep networks via gradient-based localization.