
REFERENCES
Calafiore, G., Dabbene, F., and Tempo, R. (1998). Uniform sample generation in l_p balls for probabilistic robustness analysis. In Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No. 98CH36171), volume 3, pages 3335–3340. IEEE.
Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., and Kurakin, A. (2019). On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705.
Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE.
Cohen, J. M., Rosenfeld, E., and Kolter, J. Z. (2019). Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning (ICML) 2019.
Croce, F. and Hein, M. (2020). Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, pages 2206–2216. PMLR.
Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. (2020). Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702–703.
Dai, W. and Berleant, D. (2021). Benchmarking robustness of deep learning classifiers using two-factor perturbation. In 2021 IEEE International Conference on Big Data (Big Data), pages 5085–5094. IEEE.
Dodge, S. and Karam, L. (2017). A study and comparison of human and deep learning recognition performance under visual distortions. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), pages 1–7. IEEE.
Drenkow, N., Sani, N., Shpitser, I., and Unberath, M. (2021). A systematic review of robustness in deep learning for computer vision: Mind the gap? arXiv preprint arXiv:2112.00639.
Erichson, N. B., Lim, S. H., Utrera, F., Xu, W., Cao, Z., and Mahoney, M. W. (2022). Noisymix: Boosting robustness by combining data augmentations, stability training, and noise injections. arXiv preprint arXiv:2202.01263.
Fawzi, A., Fawzi, H., and Fawzi, O. (2018a). Adversarial vulnerability for any classifier. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
Fawzi, A., Fawzi, O., and Frossard, P. (2018b). Analysis of classifiers’ robustness to adversarial perturbations. Machine Learning, 107(3):481–508.
Ford, N., Gilmer, J., Carlini, N., and Cubuk, E. D. (2019). Adversarial examples are a natural consequence of test error in noise. In Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, California, PMLR 97.
Hendrycks, D. and Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations (ICLR) 2019.
Hendrycks, D., Mu, N., Cubuk, E. D., Zoph, B., Gilmer, J., and Lakshminarayanan, B. (2019). Augmix: A simple data processing method to improve robustness and uncertainty. In International Conference on Learning Representations (ICLR) 2020.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708.
Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M., and Yi, X. (2020). A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37:100270.
Kireev, K., Andriushchenko, M., and Flammarion, N. (2022). On the effectiveness of adversarial training against common corruptions. In Uncertainty in Artificial Intelligence, pages 1012–1021. PMLR.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images. Toronto, Canada.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90.
Le, Y. and Yang, X. (2015). Tiny imagenet visual recognition challenge. CS 231N, 2015.
Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., and Jana, S. (2019). Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pages 656–672. IEEE.
Lim, S. H., Erichson, N. B., Utrera, F., Xu, W., and Mahoney, M. W. (2021). Noisy feature mixup. In International Conference on Learning Representations.
Lopes, R. G., Yin, D., Poole, B., Gilmer, J., and Cubuk, E. D. (2019). Improving robustness without sacrificing accuracy with patch gaussian augmentation. arXiv preprint arXiv:1906.02611.
Loshchilov, I. and Hutter, F. (2016). Sgdr: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR) 2018.
Mintun, E., Kirillov, A., and Xie, S. (2021). On interaction between augmentations and corruptions in natural corruption robustness. Advances in Neural Information Processing Systems, 34:3571–3583.
Müller, S. G. and Hutter, F. (2021). Trivialaugment: Tuning-free yet state-of-the-art data augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 774–782.
Rusak, E., Schott, L., Zimmermann, R. S., Bitterwolf, J., Bringmann, O., Bethge, M., and Brendel, W. (2020). A simple way to make neural networks robust against