
ACKNOWLEDGEMENTS
In loving memory of TianHao Bu's father, BingXin Bu (10/01/1965 - 13/09/2023).
REFERENCES
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Bansal, M., Kumar, M., Sachdeva, M., and Mittal, A. (2021). Transfer learning for image classification using VGG19: Caltech-101 image data set. Journal of Ambient Intelligence and Humanized Computing, pages 1–12.
Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679–698.
Chen, X. and Wang, G. (2021). Few-shot learning by integrating spatial and frequency representation. In 2021 18th Conference on Robots and Vision (CRV), pages 49–56. IEEE.
Chlap, P., Min, H., Vandenberg, N., Dowling, J., Holloway, L., and Haworth, A. (2021). A review of medical image data augmentation techniques for deep learning applications. Journal of Medical Imaging and Radiation Oncology, 65(5):545–563.
Deng, G. and Cahill, L. (1993). An adaptive Gaussian filter for noise reduction and edge detection. In 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference, pages 1615–1619. IEEE.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Kim, J.-H., Choo, W., and Song, H. O. (2020). Puzzle Mix: Exploiting saliency and local statistics for optimal mixup. In International Conference on Machine Learning, pages 5275–5285. PMLR.
Liu, Z., Li, S., Wu, D., Liu, Z., Chen, Z., Wu, L., and Li, S. Z. (2022). AutoMix: Unveiling the power of mixup for stronger classifiers. In European Conference on Computer Vision, pages 441–458. Springer.
Mukai, K., Kumano, S., and Yamasaki, T. (2022). Improving robustness to out-of-distribution data by frequency-based augmentation. In 2022 IEEE International Conference on Image Processing (ICIP), pages 3116–3120. IEEE.
Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. (2018). Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv preprint arXiv:1806.00451.
Shao, S., Wang, Y., Liu, B., Liu, W., Wang, Y., and Liu, B. (2023). FADS: Fourier-augmentation based data-shunting for few-shot classification. IEEE Transactions on Circuits and Systems for Video Technology.
Srivastava, R., Gupta, J., Parthasarthy, H., and Srivastava, S. (2009). PDE based unsharp masking, crispening and high boost filtering of digital images. In Contemporary Computing: Second International Conference, IC3 2009, Noida, India, August 17–19, 2009. Proceedings 2, pages 8–13. Springer.
Yin, W., Wang, H., Qu, J., and Xiong, C. (2021). BatchMixup: Improving training by interpolating hidden states of the entire mini-batch. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4908–4912.
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. (2019). CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023–6032.
Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. (2017). mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412.
Zhang, H., Zhang, L., and Jiang, Y. (2019). Overfitting and underfitting analysis for deep learning based end-to-end communication systems. In 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), pages 1–6. IEEE.
Zhou, Y., Wang, X., Zhang, M., Zhu, J., Zheng, R., and Wu, Q. (2019). MPCE: A maximum probability based cross entropy loss function for neural network classification. IEEE Access, 7:146331–146341.