classification accuracy of the proposed method ranges
from 0.046 to 0.129 (D=16) and from 0.044 to 0.116
(D=32). Note that the classification accuracy of the
original CNN-based classifiers on the adversarial
images is zero (100% error).
5 CONCLUSIONS
In this paper, we proposed a modified kNN classifier
that operates in the output vector space of CNN-based
classifiers to provide robust performance against
adversarial attacks. The modification addresses the
complexity of conventional kNN classifiers, which
becomes prohibitive when the number of training
samples is very large. The proposed method was
evaluated on 12 models and showed a noticeable
reduction of the classification error caused by
adversarial attacks. Applying the kNN classifier to
the middle layers may further improve performance.
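As a minimal illustration of the underlying idea (not the paper's exact modified kNN), the sketch below classifies a query output vector by majority vote among its k nearest training output vectors under Euclidean distance; the `knn_predict` helper and the toy D=2 output vectors are hypothetical.

```python
from collections import Counter
import math

def knn_predict(train_outputs, train_labels, query, k=3):
    """Majority vote among the k training output vectors nearest to `query`.

    train_outputs: list of output vectors from a trained classifier
    train_labels:  class label for each training output vector
    query:         output vector of the sample to classify
    """
    # Sort (distance, label) pairs by distance to the query vector.
    dists = sorted(
        (math.dist(vec, query), label)
        for vec, label in zip(train_outputs, train_labels)
    )
    # Majority vote over the k nearest neighbors.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy output vectors of a hypothetical 2-class classifier head (D=2).
train_out = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_lab = [0, 0, 1, 1]
print(knn_predict(train_out, train_lab, [0.85, 0.15], k=3))  # → 0
print(knn_predict(train_out, train_lab, [0.15, 0.85], k=3))  # → 1
```

Because the vote is taken in the classifier's output space rather than in pixel space, a small adversarial perturbation that flips the argmax of a single output vector can still be outvoted by nearby clean training outputs.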
ACKNOWLEDGEMENTS
This research was supported in part by Basic Science
Research Program through the National Research
Foundation of Korea (NRF) funded by the Ministry
of Education, Science and Technology (NRF-
2020R1A2C1012221).
ICPRAM 2023 - 12th International Conference on Pattern Recognition Applications and Methods