Authors:
Felipe Moreno-Vera¹; Edgar Medina² and Jorge Poco¹
Affiliations:
¹ Fundação Getúlio Vargas, Rio de Janeiro, Brazil; ² QualityMinds, Munich, Germany
Keyword(s):
Style Augmentation, Adversarial Attack, Understanding, Style, Convolutional Networks, Explanation, Interpretability, Domain Adaptation, Image Classification, Model Explanation, Model Interpretation.
Abstract:
Style augmentation is currently attracting attention because convolutional neural networks (CNNs) are strongly biased toward recognizing textures rather than shapes. Most existing stylization methods either perform low-fidelity style transfer or produce a weak style representation in the embedding vector. This paper presents a style augmentation algorithm that uses stochastic sampling with noise addition to improve randomization over a general linear transformation for style transfer. With our augmentation strategy, all models not only exhibit strong robustness against image stylization but also outperform previous methods, surpassing the state-of-the-art performance on the STL-10 dataset. In addition, we analyze model interpretations under different style variations and report comprehensive experiments demonstrating performance when the method is applied to deep neural architectures in different training settings.
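The stochastic sampling with noise addition mentioned in the abstract can be illustrated with a minimal sketch. The embedding dimensionality, the distribution parameters (`mu`, `L`), and the `sample_style` helper below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 100  # hypothetical style-embedding dimensionality

# Hypothetical mean and covariance factor of the style-embedding
# distribution; in practice these would be fit to a style dataset.
mu = rng.normal(size=EMBED_DIM)
L = rng.normal(scale=0.1, size=(EMBED_DIM, EMBED_DIM))

def sample_style(alpha=0.5, noise_scale=0.1):
    """Stochastically sample a style embedding with added noise.

    alpha interpolates between a randomly drawn style and the mean
    style; extra Gaussian noise improves randomization before the
    embedding is fed to a linear style-transfer transformation.
    """
    z = mu + L @ rng.normal(size=EMBED_DIM)             # random style draw
    z = alpha * z + (1 - alpha) * mu                    # interpolate toward mean
    z += rng.normal(scale=noise_scale, size=EMBED_DIM)  # noise addition
    return z

style = sample_style()
```

Each call to `sample_style` yields a different embedding, so the same training image can be stylized differently across epochs.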