Huval, B., Wang, T., Tandon, S., Kiske, J., Song, W., Pazhayampallil, J., Andriluka, M., Rajpurkar, P., Migimatsu, T., Cheng-Yue, R., et al. (2015). An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716.
Karmon, D., Zoran, D., and Goldberg, Y. (2018). LaVaN: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507–2515. PMLR.
Lee, M. and Kolter, Z. (2019). On physical adversarial
patches for object detection. ICML Workshop on Se-
curity and Privacy of Machine Learning.
Li, R., Zhang, H., Yang, P., Huang, C.-C., Zhou, A., Xue,
B., and Zhang, L. (2021). Ensemble defense with data
diversity: Weak correlation implies strong robustness.
arXiv preprint arXiv:2106.02867.
Liu, X., Yang, H., Song, L., Li, H., and Chen, Y. (2019). DPatch: Attacking object detectors with adversarial patches. AAAI Workshop on Artificial Intelligence Safety.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2018). Towards deep learning models re-
sistant to adversarial attacks. In International Confer-
ence on Learning Representations.
Mustafa, A., Khan, S., Hayat, M., Goecke, R., Shen, J., and
Shao, L. (2019). Adversarial defense by restricting the
hidden space of deep neural networks. In Proceedings
of the IEEE/CVF International Conference on Com-
puter Vision, pages 3385–3394.
Naseer, M., Khan, S., and Porikli, F. (2019). Local gradients smoothing: Defense against localized adversarial attacks. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1300–1307.
Papernot, N. and McDaniel, P. (2017). Extending defensive
distillation. arXiv preprint arXiv:1705.05264.
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik,
Z. B., and Swami, A. (2016a). The limitations of deep
learning in adversarial settings. In 2016 IEEE Euro-
pean symposium on security and privacy (EuroS&P),
pages 372–387. IEEE.
Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami,
A. (2016b). Distillation as a defense to adversarial
perturbations against deep neural networks. In 2016
IEEE symposium on security and privacy (SP), pages
582–597. IEEE.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32:8026–8037.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). ImageNet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252.
Shaham, U., Yamada, Y., and Negahban, S. (2015). Under-
standing adversarial training: Increasing local stabil-
ity of neural nets through robust optimization. arXiv
preprint arXiv:1511.05432.
Shao, R., Shi, Z., Yi, J., Chen, P.-Y., and Hsieh, C.-J. (2021). Robust text CAPTCHAs using adversarial examples. arXiv preprint arXiv:2101.02483.
Shi, C., Xu, X., Ji, S., Bu, K., Chen, J., Beyah, R., and Wang, T. (2021). Adversarial CAPTCHAs. IEEE Transactions on Cybernetics.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013).
Deep inside convolutional networks: Visualising im-
age classification models and saliency maps. arXiv
preprint arXiv:1312.6034.
Simonyan, K. and Zisserman, A. (2014). Very deep con-
volutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556.
Smilkov, D., Thorat, N., Kim, B., Viégas, F., and Wattenberg, M. (2017). SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.
Su, J., Vargas, D. V., and Sakurai, K. (2019). One pixel at-
tack for fooling deep neural networks. IEEE Transac-
tions on Evolutionary Computation, 23(5):828–841.
Sundararajan, M., Taly, A., and Yan, Q. (2017). Axiomatic
attribution for deep networks. In International Confer-
ence on Machine Learning, pages 3319–3328. PMLR.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S.,
Anguelov, D., Erhan, D., Vanhoucke, V., and Rabi-
novich, A. (2015). Going deeper with convolutions.
In Proceedings of the IEEE conference on computer
vision and pattern recognition, pages 1–9.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wo-
jna, Z. (2016). Rethinking the inception architecture
for computer vision. In Proceedings of the IEEE con-
ference on computer vision and pattern recognition,
pages 2818–2826.
Thys, S., Van Ranst, W., and Goedemé, T. (2019). Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0.
Xu, W., Evans, D., and Qi, Y. (2018). Feature squeezing: Detecting adversarial examples in deep neural networks. In Network and Distributed System Security Symposium (NDSS).
Enhanced Local Gradient Smoothing: Approaches to Attacked-region Identification and Defense