The Advancements of Pruning Techniques in Deep Neural Networks

Xinrui Li

2024

Abstract

Neural network pruning is widely used to compress large, over-parameterized deep neural network (DNN) models. Previous studies have shown that pruning benefits DNN models in several respects: it reduces resource consumption and storage costs, broadens the range of devices on which models can be deployed, speeds up inference, and enables real-time service. This article summarizes the development and main progress of pruning techniques to date and discusses the two main types of pruning, their basic principles, advantages and disadvantages, and performance in different application scenarios. Pruning can be divided into two categories: unstructured pruning and structured pruning. Unstructured pruning offers the flexibility to adapt individual parameters to task requirements and achieves higher compression ratios, whereas structured pruning improves model performance, stability, and interpretability by removing entire modules. These respective characteristics make each category suitable for models of different structural complexity and for different task requirements. Although pruning is relatively mature compared with other model compression techniques, challenges remain: current methods still depend on manually selecting which parameters to prune, and techniques for automatically identifying pruning targets are not yet well developed. This paper surveys the state of pruning research in the field of DNNs and discusses current technological developments, applications, limitations, and open challenges. The review concludes by highlighting potential directions for future research in this area, including automated pruning and enhancing transfer learning across fields.
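The distinction drawn in the abstract can be illustrated with a minimal sketch: unstructured pruning zeroes out individual low-magnitude weights, while structured pruning removes whole rows (e.g. neurons or channels) at once. This is a purely illustrative toy example using magnitude-based criteria; the function names, the weight matrix, and the scoring rules are assumptions for demonstration, not methods from the paper.

```python
def unstructured_prune(weights, sparsity):
    """Zero out the smallest-magnitude individual weights (fine-grained).

    `weights` is a list of rows; `sparsity` is the fraction of weights to remove.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)          # number of weights to zero out
    if k == 0:
        return [row[:] for row in weights]
    threshold = flat[k - 1]                # k-th smallest magnitude
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]


def structured_prune(weights, n_rows_to_keep):
    """Remove entire rows (whole neurons/channels), ranked by L1 norm."""
    ranked = sorted(range(len(weights)),
                    key=lambda i: sum(abs(w) for w in weights[i]),
                    reverse=True)
    keep = sorted(ranked[:n_rows_to_keep])  # preserve original row order
    return [weights[i] for i in keep]


# Toy 3x2 weight matrix of a hypothetical dense layer.
W = [[0.1, -0.5], [2.0, 0.05], [0.3, -1.2]]
U = unstructured_prune(W, 0.5)   # half the weights become zero, shape unchanged
S = structured_prune(W, 2)       # one whole row is dropped, shape shrinks
```

The sketch shows why the abstract associates unstructured pruning with higher compression ratios but irregular sparsity (the matrix keeps its shape and needs sparse storage to realize savings), while structured pruning yields a genuinely smaller dense model.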



Paper Citation


in Harvard Style

Li X. (2024). The Advancements of Pruning Techniques in Deep Neural Networks. In Proceedings of the 1st International Conference on Data Science and Engineering - Volume 1: ICDSE; ISBN 978-989-758-690-3, SciTePress, pages 177-181. DOI: 10.5220/0012837000004547


in Bibtex Style

@conference{icdse24,
author={Xinrui Li},
title={The Advancements of Pruning Techniques in Deep Neural Networks},
booktitle={Proceedings of the 1st International Conference on Data Science and Engineering - Volume 1: ICDSE},
year={2024},
pages={177-181},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012837000004547},
isbn={978-989-758-690-3},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 1st International Conference on Data Science and Engineering - Volume 1: ICDSE
TI - The Advancements of Pruning Techniques in Deep Neural Networks
SN - 978-989-758-690-3
AU - Li X.
PY - 2024
SP - 177
EP - 181
DO - 10.5220/0012837000004547
PB - SciTePress