
volves generating physical samples for attacks, targeting the verification of models such as facial recognition systems deployed at airports, terminals, and other locations.