Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. The MIT Press.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Gu, S. and Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068.
Guo, C., Rana, M., Cisse, M., and van der Maaten, L. (2017). Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117.
Hastie, T., Tibshirani, R., and Wainwright, M. (2015). Statistical learning with sparsity: the lasso and generalizations. CRC Press.
Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, Citeseer.
Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Lee, H., Han, S., and Lee, J. (2017). Generative adversarial trainer: Defense to adversarial perturbations with GAN. arXiv preprint arXiv:1705.03387.
Liao, F., Liang, M., Dong, Y., Pang, T., Zhu, J., and Hu, X. (2017). Defense against adversarial attacks using high-level representation guided denoiser. arXiv preprint arXiv:1712.02976.
Liu, Y., Chen, X., Liu, C., and Song, D. (2016). Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770.
Luo, Y., Boix, X., Roig, G., Poggio, T., and Zhao, Q. (2015). Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292.
Ma, X., Li, B., Wang, Y., Erfani, S. M., Wijewickrema, S., Schoenebeck, G., Song, D., Houle, M. E., and Bailey, J. (2018). Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Miyato, T., Dai, A. M., and Goodfellow, I. (2016). Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016). DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582.
Nayebi, A. and Ganguli, S. (2017). Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202.
Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. (2016). Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pages 582–597. IEEE.
Poursaeed, O., Katsman, I., Gao, B., and Belongie, S. (2017). Generative adversarial perturbations. arXiv preprint arXiv:1712.02328.
Ramachandran, P., Zoph, B., and Le, Q. V. (2017). Searching for activation functions. arXiv preprint arXiv:1710.05941.
Rauber, J., Brendel, W., and Bethge, M. (2017). Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131.
Ross, A. S. and Doshi-Velez, F. (2017). Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. arXiv preprint arXiv:1711.09404.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520.
Sarkar, S., Bansal, A., Mahbub, U., and Chellappa, R. (2017). UPSET and ANGRI: Breaking high performance image classifiers. arXiv preprint arXiv:1707.01159.
Shen, S., Jin, G., Gao, K., and Zhang, Y. (2017). APE-GAN: Adversarial perturbation elimination with GAN. arXiv preprint arXiv:1707.05474.
Su, J., Vargas, D. V., and Sakurai, K. (2017). One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Tanay, T. and Griffin, L. (2016). A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690.
Xu, W., Evans, D., and Qi, Y. (2017). Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155.
Yao, Z., Gholami, A., Xu, P., Keutzer, K., and Mahoney, M. (2018). Trust region based adversarial attack on neural networks. arXiv preprint arXiv:1812.06371.
Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. arXiv preprint arXiv:1605.07146.
Zantedeschi, V., Nicolae, M.-I., and Rawat, A. (2017). Efficient defenses against adversarial attacks. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 39–49. ACM.