
DNN Pruning and Its Effects on Robustness

Authors: Sven Mantowksy; Firas Mualla; Saqib Sayad Bukhari; Georg Schneider

Affiliation: ZF Friedrichshafen AG, AI-Lab, Saarbrücken, Germany

Keyword(s): Pruning, Explainability, Calibration.

Abstract: The popularity of deep neural networks (DNNs) and their application on embedded systems and edge devices is increasing rapidly. Most embedded systems are limited in their computational capabilities and memory space. To meet these restrictions, DNNs need to be compressed while keeping their accuracy, for instance by pruning the least important neurons or filters. However, pruning may introduce other effects on the model, such as influencing the robustness of its predictions. To analyze the impact of pruning on model robustness, we employ two metrics: the heatmap-based correlation coefficient (HCC) and the expected calibration error (ECE). Using the HCC, on the one hand, it is possible to gain insight into the extent to which a model and its compressed version tend to use the same input features. On the other hand, using the difference in the ECE between a model and its compressed version, we can analyze the side effect of pruning on the model's decision reliability. The experiments were conducted for image classification and object detection problems. For both problem types, our results show that some off-the-shelf pruning methods considerably improve the model calibration without being specifically designed for this purpose. For instance, the ECE of a VGG16 classifier is improved by 35% after being compressed by 50% using the H-Rank pruning method, with a negligible loss in accuracy. Larger compression ratios reduce the accuracy as expected but may improve the calibration drastically (e.g., the ECE is reduced by 77% under a compression ratio of 70%). Moreover, the HCC measures feature saliency under model compression and, as expected, tends to correlate positively with the model's accuracy. The proposed metrics can be employed for comparing pruning methods from another perspective than the commonly considered trade-off between accuracy and compression ratio.
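To make the two metrics concrete, here is a minimal Python/NumPy sketch of how they could be computed. This is an illustration derived from the abstract only, not the authors' implementation: the equal-width binning, the function names, and the assumption that the heatmaps come from some saliency method (e.g. Grad-CAM) applied to both the original and the pruned model are all ours.

import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    # Bin samples by top-class confidence, then average the per-bin gap
    # between empirical accuracy and mean confidence, weighted by bin size.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(labels), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = (predictions[mask] == labels[mask]).mean()
        conf = confidences[mask].mean()
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece

def heatmap_correlation(h_orig, h_pruned):
    # Pearson correlation between the saliency heatmaps that the original
    # and the pruned model produce for the same input (an HCC-style score).
    a = h_orig.ravel().astype(float)
    b = h_pruned.ravel().astype(float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

Under this reading, a pruning method is judged not only by accuracy versus compression ratio but also by how much expected_calibration_error drops for the pruned model and how close heatmap_correlation stays to 1 when averaged over a test set.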

CC BY-NC-ND 4.0

Paper citation in several formats:
Mantowksy, S., Mualla, F., Sayad Bukhari, S. and Schneider, G. (2023). DNN Pruning and Its Effects on Robustness. In Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-626-2; ISSN 2184-4313, SciTePress, pages 82-88. DOI: 10.5220/0011651000003411

@conference{icpram23,
author={Sven Mantowksy and Firas Mualla and Saqib {Sayad Bukhari} and Georg Schneider},
title={DNN Pruning and Its Effects on Robustness},
booktitle={Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2023},
pages={82-88},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011651000003411},
isbn={978-989-758-626-2},
issn={2184-4313},
}

TY - CONF
JO - Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - DNN Pruning and Its Effects on Robustness
SN - 978-989-758-626-2
IS - 2184-4313
AU - Mantowksy, S.
AU - Mualla, F.
AU - Sayad Bukhari, S.
AU - Schneider, G.
PY - 2023
SP - 82
EP - 88
DO - 10.5220/0011651000003411
PB - SciTePress
ER -