domain adaptation methods based on feature transformation, such as (Fernando et al., 2013; Sun et al., 2016).
REFERENCES
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z.,
Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin,
M., Ghemawat, S., Goodfellow, I., Harp, A., Irving,
G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kud-
lur, M., Levenberg, J., Mané, D., Monga, R., Moore,
S., Murray, D., Olah, C., Schuster, M., Shlens, J.,
Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Van-
houcke, V., Vasudevan, V., Viégas, F., Vinyals, O.,
Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and
Zheng, X. (2015). TensorFlow: Large-scale machine
learning on heterogeneous systems. Software avail-
able from tensorflow.org.
Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A.,
Pereira, F., and Vaughan, J. W. (2010). A theory of
learning from different domains. Machine Learning,
79(1):151–175.
Ben-David, S., Blitzer, J., Crammer, K., and Pereira, F.
(2007). Analysis of representations for domain adap-
tation. In Advances in Neural Information Processing
Systems, pages 137–144.
Chollet, F. (2016). Xception: Deep learning with
depthwise separable convolutions. arXiv preprint
arXiv:1610.02357.
Chollet, F. et al. (2015). Keras. https://github.com/fchollet/
keras.
Csurka, G. (2017). Domain adaptation for visual appli-
cations: A comprehensive survey. arXiv preprint
arXiv:1702.05374.
Fernando, B., Habrard, A., Sebban, M., and Tuytelaars, T.
(2013). Unsupervised visual domain adaptation using
subspace alignment. In Proceedings of the 2013 IEEE
International Conference on Computer Vision, ICCV
’13, pages 2960–2967, Washington, DC, USA. IEEE
Computer Society.
Ganin, Y. and Lempitsky, V. (2015). Unsupervised domain
adaptation by backpropagation. In International Con-
ference on Machine Learning, pages 1180–1189.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P.,
Larochelle, H., Laviolette, F., Marchand, M., and
Lempitsky, V. (2016). Domain-adversarial training of
neural networks. Journal of Machine Learning Re-
search, 17(59):1–35.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative adversarial nets. In Ad-
vances in Neural Information Processing Systems,
pages 2672–2680.
Gretton, A., Smola, A. J., Huang, J., Schmittfull, M., Borg-
wardt, K. M., and Schölkopf, B. (2009). Covariate
shift by kernel mean matching. In Quiñonero-Candela,
J., Sugiyama, M., Schwaighofer, A., and Lawrence,
N. D., editors,
Dataset Shift in Machine Learning, pages 131–160.
MIT press.
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D.,
Wang, W., Weyand, T., Andreetto, M., and Adam,
H. (2017). MobileNets: Efficient convolutional neu-
ral networks for mobile vision applications. arXiv
preprint arXiv:1704.04861.
Huang, G., Liu, Z., van der Maaten, L., and Weinberger,
K. Q. (2017). Densely connected convolutional net-
works. In 2017 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), pages 2261–2269.
Long, M., Cao, Y., Wang, J., and Jordan, M. (2015).
Learning transferable features with deep adaptation
networks. In International Conference on Machine
Learning, pages 97–105.
Long, M., Wang, J., and Jordan, M. I. (2016a). Deep trans-
fer learning with joint adaptation networks. arXiv
preprint arXiv:1605.06636.
Long, M., Zhu, H., Wang, J., and Jordan, M. I. (2016b).
Unsupervised domain adaptation with residual trans-
fer networks. In Advances in Neural Information Pro-
cessing Systems, pages 136–144.
van der Maaten, L. and Hinton, G. (2008). Visualizing
data using t-SNE. Journal of Machine Learning Re-
search, 9(Nov):2579–2605.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S.,
Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bern-
stein, M., et al. (2015). ImageNet large scale visual
recognition challenge. International Journal of Com-
puter Vision, 115(3):211–252.
Saenko, K., Kulis, B., Fritz, M., and Darrell, T. (2010).
Adapting visual category models to new domains.
Computer Vision–ECCV 2010, pages 213–226.
Simonyan, K. and Zisserman, A. (2014). Very deep con-
volutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556.
Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I.,
and Salakhutdinov, R. (2014). Dropout: a simple way
to prevent neural networks from overfitting. Journal
of Machine Learning Research, 15(1):1929–1958.
Sun, B., Feng, J., and Saenko, K. (2016). Return of frus-
tratingly easy domain adaptation. In Thirtieth AAAI
Conference on Artificial Intelligence.
Sun, B. and Saenko, K. (2016). Deep CORAL: Correla-
tion alignment for deep domain adaptation. In Com-
puter Vision–ECCV 2016 Workshops, pages 443–450.
Springer.
Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A.
(2017). Inception-v4, Inception-ResNet and the impact
of residual connections on learning. In AAAI, pages
4278–4284.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wo-
jna, Z. (2016). Rethinking the inception architecture
for computer vision. In Proceedings of the IEEE Con-
ference on Computer Vision and Pattern Recognition,
pages 2818–2826.
Tommasi, T. and Tuytelaars, T. (2014). A testbed for cross-
dataset analysis. In European Conference on Com-
puter Vision, pages 18–31. Springer.
Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017).
Adversarial discriminative domain adaptation. In Pro-
ceedings of the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR).
ICPRAM 2019 - 8th International Conference on Pattern Recognition Applications and Methods