propose to investigate more deeply which features are selected to be preserved in each style and which distortions they could generate through the network layers.
ACKNOWLEDGEMENTS
This work was supported by the Carlos Chagas Filho Foundation for Research Support of Rio de Janeiro State (FAPERJ)-Brazil (grant #E-26/201.424/2021), the São Paulo Research Foundation (FAPESP)-Brazil (grant #2021/07012-0), and the School of Applied Mathematics at Fundação Getulio Vargas (FGV/EMAp). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of FAPESP, FAPERJ, or FGV.