Figure 4: Results (correct classification rates) obtained on the USPS data set as a function of the Gaussian kernel parameter.
We independently generate 50 different banana-orange data sets and show the average performance (±1 standard deviation) in fig. 2. In most cases, our method achieves better results than LM, and it does so over a wider range of kernel parameter values.
We now show the results obtained on the USPS data set. As shown in fig. 4, our method provides higher performance for almost all the kernel parameter values considered.
6 CONCLUSION AND FUTURE DIRECTIONS
In this paper, we propose a new approach to the domain adaptation problem when no labeled target data is available. The idea is to project the source and target data onto a subspace of an RKHS in which the source and target distributions are expected to be similar. To do so, we select the subspace that makes a Maximum Mean Discrepancy (MMD) based criterion vanish. Once the projected source and target data are similarly distributed, the SVM classifier trained on the source data also performs well on the target data. We have shown that this additional constraint on the primal optimization problem does not change the nature of the dual problem, so that standard quadratic programming tools can still be used. We have applied our method to synthetic and real data sets and shown that our results compare favorably with Large Margin Transductive Transfer Learning.
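For concreteness, the quantity driving the subspace selection is the empirical MMD between source and target samples. The sketch below computes the biased empirical estimate of the squared MMD under a Gaussian kernel, following Gretton et al. (2012); it is illustrative only, not our implementation, and the names Xs, Xt, and sigma are placeholders we introduce here.

import numpy as np

def gaussian_kernel(A, B, sigma):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)), computed pairwise
    # between the rows of A and the rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(Xs, Xt, sigma):
    # Biased empirical estimate of MMD^2 (Gretton et al., 2012):
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t).
    # The subspace selection drives this quantity towards zero.
    return (gaussian_kernel(Xs, Xs, sigma).mean()
            + gaussian_kernel(Xt, Xt, sigma).mean()
            - 2 * gaussian_kernel(Xs, Xt, sigma).mean())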
As an important short-term development, we plan to propose a method that automatically determines an adequate value of the Gaussian kernel parameter used in our experiments. We also intend to consider multiple kernel learning. Finally, more complex real data sets should be used to benchmark our transfer learning method.
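One natural candidate for such an automatic choice, which we have not evaluated in this paper, is the median heuristic commonly used with Gaussian kernels: set the bandwidth to the median pairwise distance over the pooled source and target samples. A minimal sketch, with placeholder names of our own:

import numpy as np

def median_heuristic(X):
    # Bandwidth sigma = median of the pairwise Euclidean distances
    # over the pooled sample X (one data point per row).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T
    d = np.sqrt(np.maximum(sq, 0.0))
    # Keep only the off-diagonal (i < j) distances.
    return np.median(d[np.triu_indices_from(d, k=1)])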
REFERENCES
Blitzer, J., Dredze, M., and Pereira, F. (2007). Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, volume 7, pages 440–447.
Bruzzone, L. and Marconcini, M. (2010). Domain adaptation problems: A DASVM classification technique and a circular validation strategy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5):770–787.
Dudley, R. M. (1984). A course on empirical processes. In École d'été de Probabilités de Saint-Flour XII-1982, pages 1–142. Springer.
Dudley, R. M. (2002). Real analysis and probability, volume 74. Cambridge University Press.
Fortet, R. and Mourier, E. (1953). Convergence de la répartition empirique vers la répartition théorique. Ann. Scient. École Norm. Sup., pages 266–285.
Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. (2012). A kernel two-sample test. J. Mach. Learn. Res., 13:723–773.
Huang, C.-H., Yeh, Y.-R., and Wang, Y.-C. F. (2012). Recognizing actions across cameras by exploring the correlated subspace. In Computer Vision – ECCV 2012: Workshops and Demonstrations, pages 342–351. Springer.
Huang, J., Gretton, A., Borgwardt, K. M., Schölkopf, B., and Smola, A. J. (2006). Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pages 601–608.
Jiang, J. (2008). A literature survey on domain adaptation of statistical classifiers. URL: http://sifaka.cs.uiuc.edu/jiang4/domainadaptation/survey.
Li, L., Zhou, K., Xue, G.-R., Zha, H., and Yu, Y. (2011). Video summarization via transferrable structured learning. In Proceedings of the 20th International Conference on World Wide Web, pages 287–296. ACM.
Liang, F., Tang, S., Zhang, Y., Xu, Z., and Li, J. (2014). Pedestrian detection based on sparse coding and transfer learning. Machine Vision and Applications, 25(7):1697–1709.
Pan, S. J., Tsang, I. W., Kwok, J. T., and Yang, Q. (2011). Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210.
Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359.
Patel, V. M., Gopalan, R., Li, R., and Chellappa, R. (2015). Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69.
Paulsen, V. I. (2009). An introduction to the theory of reproducing kernel Hilbert spaces. Lecture Notes.
Quanz, B. and Huan, J. (2009). Large margin transductive transfer learning. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1327–1336. ACM.