
Conference on Computer Vision and Pattern Recogni-
tion (CVPR), pages 770–778.
He, Y., Lin, J., Liu, Z., Wang, H., Li, L.-J., and Han, S.
(2018). AMC: AutoML for Model Compression and
Acceleration on Mobile Devices. In Ferrari, V., Hebert,
M., Sminchisescu, C., and Weiss, Y., editors, Computer
Vision – ECCV 2018, pages 815–832, Cham. Springer
International Publishing.
He, Y., Liu, P., Wang, Z., Hu, Z., and Yang, Y. (2019). Filter
Pruning via Geometric Median for Deep Convolutional
Neural Networks Acceleration. In 2019 IEEE/CVF
Conference on Computer Vision and Pattern Recogni-
tion (CVPR), pages 4335–4344.
Howard, A., Zhu, M., Chen, B., Kalenichenko, D., Wang,
W., Weyand, T., Andreetto, M., and Adam, H. (2017).
MobileNets: Efficient Convolutional Neural Networks
for Mobile Vision Applications. arXiv preprint
arXiv:1704.04861.
Hua, W., Zhou, Y., De Sa, C., Zhang, Z., and Suh, G. E.
(2019). Channel Gating Neural Networks. In Advances
in Neural Information Processing Systems, volume 32,
Red Hook, NY, USA. Curran Associates Inc.
Indyk, P. and Motwani, R. (1998). Approximate Nearest
Neighbors: Towards Removing the Curse of Dimen-
sionality. In Proceedings of the Thirtieth Annual ACM
Symposium on Theory of Computing, STOC ’98, page
604–613, New York, NY, USA. Association for Com-
puting Machinery.
Kitaev, N., Kaiser, L., and Levskaya, A. (2020). Reformer:
The Efficient Transformer. In International Conference
on Learning Representations.
Krizhevsky, A. (2009). Learning Multiple Layers of Features
from Tiny Images.
Li, F., Li, G., He, X., and Cheng, J. (2021). Dynamic
Dual Gating Neural Networks. In 2021 IEEE/CVF
International Conference on Computer Vision (ICCV),
pages 5310–5319.
Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P.
(2017). Pruning Filters for Efficient ConvNets. In In-
ternational Conference on Learning Representations.
Li, P., Hastie, T., and Church, K. (2006). Very Sparse Ran-
dom Projections. In Proceedings of the 12th ACM
SIGKDD International Conference on Knowledge Discovery
and Data Mining, KDD ’06, pages 287–296.
Lin, J., Rao, Y., Lu, J., and Zhou, J. (2017a). Runtime
Neural Pruning. In Guyon, I., Luxburg, U. V., Bengio,
S., Wallach, H., Fergus, R., Vishwanathan, S., and
Garnett, R., editors, Advances in Neural Information
Processing Systems, volume 30. Curran Associates,
Inc.
Lin, X., Zhao, C., and Pan, W. (2017b). Towards Accurate
Binary Convolutional Neural Network. In Proceed-
ings of the 31st International Conference on Neural
Information Processing Systems, pages 344–352.
Liu, L., Deng, L., Hu, X., Zhu, M., Li, G., Ding, Y., and Xie,
Y. (2019). Dynamic Sparse Graph for Efficient Deep
Learning. In International Conference on Learning
Representations.
Liu, Z., Coleman, B., and Shrivastava, A. (2021a). Ef-
ficient Inference via Universal LSH Kernel. CoRR,
abs/2106.11426.
Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C.
(2017). Learning Efficient Convolutional Networks
through Network Slimming. In 2017 IEEE Interna-
tional Conference on Computer Vision (ICCV), pages
2755–2763, Los Alamitos, CA, USA. IEEE Computer
Society.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S.,
and Guo, B. (2021b). Swin Transformer: Hierarchical
Vision Transformer using Shifted Windows. In 2021
IEEE/CVF International Conference on Computer Vi-
sion (ICCV), pages 9992–10002. IEEE Computer So-
ciety.
Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T.,
and Xie, S. (2022). A ConvNet for the 2020s. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vi-
sion and Pattern Recognition (CVPR), Proceedings of
the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pages 11966–11976.
IEEE Computer Society.
Liu, Z., Wang, P., and Li, Z. (2021c). More-Similar-Less-
Important: Filter Pruning via Kmeans Clustering. In
2021 IEEE International Conference on Multimedia
and Expo (ICME), pages 1–6.
Luo, J., Wu, J., and Lin, W. (2017). ThiNet: A Filter Level
Pruning Method for Deep Neural Network Compres-
sion. In IEEE International Conference on Computer
Vision, ICCV 2017, Venice, Italy, October 22-29, 2017,
pages 5068–5076. IEEE Computer Society.
Ma, N., Zhang, X., Zheng, H.-T., and Sun, J. (2018). Shuf-
fleNet V2: Practical Guidelines for Efficient CNN
Architecture Design. In Computer Vision – ECCV
2018: 15th European Conference, Munich, Germany,
September 8–14, 2018, Proceedings, Part XIV, pages
122–138.
Müller, T., Evans, A., Schied, C., and Keller, A. (2022).
Instant Neural Graphics Primitives with a Multiresolu-
tion Hash Encoding. ACM Trans. Graph., 41(4):102:1–
102:15.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J.,
Chanan, G., Killeen, T., Lin, Z., Gimelshein, N.,
Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito,
Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner,
B., Fang, L., Bai, J., and Chintala, S. (2019). PyTorch:
An Imperative Style, High-Performance Deep Learning
Library. In Advances in Neural Information Processing
Systems, volume 32. Curran Associates, Inc.
Phan, H. (2021). PyTorch models trained on CIFAR-10
dataset. https://github.com/huyvnphan/PyTorch_CIFAR10.
Pleiss, G., Chen, D., Huang, G., Li, T., van der Maaten,
L., and Weinberger, K. Q. (2017). Memory-Efficient
Implementation of DenseNets. CoRR, abs/1707.06990.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S.,
Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein,
M., Berg, A., and Fei-Fei, L. (2015). ImageNet Large
Scale Visual Recognition Challenge. International
Journal of Computer Vision, 115(3):211–252.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and