Figure 6: Concept visualisation using the CGAP (top row) and CPHA (bottom row) on an image labelled "Chain saw".
5 CONCLUSION
Analyzing and visualizing concepts is key to understanding model predictions. By clustering activations with similar patterns, we gain insights into the model's learned knowledge. We use two methods for concept extraction: CGAP, which focuses on general activation patterns, and CPHA, which targets high activation areas. Decomposing concepts into sub-concepts helps avoid mixing conflicting elements and compensates for clustering imperfections.
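The clustering step at the core of this idea can be sketched as follows. This is an illustrative stand-in rather than the CGAP/CPHA implementation: a minimal k-means over the per-location activation vectors of a single feature map, where the function name `cluster_activations` and the deterministic initialisation are assumptions made for the example.

```python
import numpy as np

def cluster_activations(acts, k=2, iters=20):
    """Toy k-means over per-location activation vectors.

    acts: (H*W, C) array, one C-dimensional activation vector per
    spatial location of a feature map. Returns a cluster label per
    location, grouping locations with similar activation patterns.
    """
    # Simplistic init: centres taken at evenly spaced samples
    # (a real implementation would use k-means++ or similar).
    centers = acts[np.linspace(0, len(acts) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each activation vector to its nearest centre.
        dists = np.linalg.norm(acts[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = acts[labels == j].mean(axis=0)
    return labels

# Two well-separated groups of synthetic "activation vectors":
rng = np.random.default_rng(0)
acts = np.vstack([rng.normal(0.0, 0.1, (8, 4)),
                  rng.normal(5.0, 0.1, (8, 4))])
labels = cluster_activations(acts, k=2)
```

Locations whose activation vectors are close end up in the same cluster; each cluster can then be rendered as a mask over the input image, as in Figure 6.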
Our approach is limited by its focus on individual images, neglecting relationships between activations across images. Future work could explore clustering activations across images of the same class. While our method highlights the image regions relevant to a classification, incorrect classifications still require human interpretation.
ACKNOWLEDGEMENTS
We thank the ECE for funding the Lambda Quad Max Deep Learning server used to obtain the results presented in this work.
KEOD 2024 - 16th International Conference on Knowledge Engineering and Ontology Development