REFERENCES
Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer.
Caruana, R., Lawrence, S., and Giles, C. L. (2001). Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems, pages 402–408.
Chalasani, T., Ondrej, J., and Smolic, A. (2018). Egocentric gesture recognition for head-mounted AR devices. In Adjunct Proceedings of the IEEE International Symposium on Mixed and Augmented Reality 2018 (to appear).
Chen, Y.-L. and Hsu, C.-T. (2016). Towards deep style
transfer: A content-aware perspective. In BMVC.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE.
Dumoulin, V., Shlens, J., and Kudlur, M. (2016). A learned representation for artistic style. arXiv e-prints, abs/1610.07629.
Engstrom, L. (2016). Fast style transfer. https://github.com/lengstrom/fast-style-transfer/.
Fei-Fei, L., Fergus, R., and Perona, P. (2006). One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594–611.
Gatys, L., Ecker, A. S., and Bethge, M. (2015). Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pages 262–270.
Gatys, L. A., Ecker, A. S., and Bethge, M. (2016). Image
style transfer using convolutional neural networks. In
Computer Vision and Pattern Recognition (CVPR),
2016 IEEE Conference on, pages 2414–2423. IEEE.
Griffin, G., Holub, A., and Perona, P. (2007). Caltech-256
object category dataset.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Jing, Y., Yang, Y., Feng, Z., Ye, J., and Song, M.
(2017). Neural style transfer: A review. CoRR,
abs/1705.04058.
Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual
losses for real-time style transfer and super-resolution.
In European Conference on Computer Vision, pages
694–711. Springer.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
Kyprianidis, J. E., Collomosse, J., Wang, T., and Isenberg, T. (2013). State of the "art": A taxonomy of artistic stylization techniques for images and video. IEEE Transactions on Visualization and Computer Graphics, 19(5):866–885.
Li, C. and Wand, M. (2016). Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer.
Li, Y., Wang, N., Liu, J., and Hou, X. (2017). Demystifying neural style transfer. arXiv preprint arXiv:1701.01036.
Lin, T., Maire, M., Belongie, S. J., Bourdev, L. D., Girshick, R. B., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). Microsoft COCO: common objects in context. CoRR, abs/1405.0312.
Mahendran, A. and Vedaldi, A. (2015). Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188–5196.
Perez, L. and Wang, J. (2017). The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621.
Risser, E., Wilmot, P., and Barnes, C. (2017). Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh,
S., Ma, S., Huang, Z., Karpathy, A., Khosla, A.,
Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015).
ImageNet Large Scale Visual Recognition Challenge.
International Journal of Computer Vision (IJCV),
115(3):211–252.
Gross, S. and Wilber, M. (2016). Training and investigating residual nets. http://torch.ch/blog/2016/02/04/resnets.html.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Ulyanov, D., Lebedev, V., Vedaldi, A., and Lempitsky, V. S.
(2016a). Texture networks: Feed-forward synthesis of
textures and stylized images. In ICML, pages 1349–
1357.
Ulyanov, D., Vedaldi, A., and Lempitsky, V. S. (2016b). Instance normalization: The missing ingredient for fast stylization. CoRR, abs/1607.08022.
Vasconcelos, C. N. and Vasconcelos, B. N. (2017). Increasing deep learning melanoma classification by classical and expert knowledge based image transforms. CoRR, abs/1702.07025.
Yin, R. (2016). Content aware neural style transfer. arXiv
preprint arXiv:1601.04568.
Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer.
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017).
Unpaired image-to-image translation using cycle-
consistent adversarial networks. arXiv preprint.
VISAPP 2019 - 14th International Conference on Computer Vision Theory and Applications