
Lee, J., Jung, D., Yim, J., and Yoon, S. (2022). Confidence score for source-free unsupervised domain adaptation. In International Conference on Machine Learning, pages 12365–12377. PMLR.
Liu, X., Yoo, C., Xing, F., Oh, H., El Fakhri, G., Kang, J.-W., Woo, J., et al. (2022). Deep unsupervised domain adaptation: A review of recent advances and perspectives. APSIPA Transactions on Signal and Information Processing, 11(1).
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022.
Long, M., Cao, Y., Wang, J., and Jordan, M. (2015). Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105. PMLR.
Luo, B., Feng, Y., Wang, Z., Zhu, Z., Huang, S., Yan, R., and Zhao, D. (2017). Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix. arXiv preprint arXiv:1705.03995.
MacQueen, J. et al. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. Oakland, CA, USA.
Mahapatra, D., Ge, Z., and Reyes, M. (2022). Self-supervised generalized zero shot learning for medical image classification using novel interpretable saliency maps. IEEE Transactions on Medical Imaging, 41(9):2443–2456.
Peng, X., Usman, B., Kaushik, N., Hoffman, J., Wang, D., and Saenko, K. (2017). VisDA: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.
Prabhu, V., Khare, S., Kartik, D., and Hoffman, J. (2021). SENTRY: Selective entropy optimization via committee consistency for unsupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8558–8567.
Ren, C.-X., Liu, Y.-H., Zhang, X.-W., and Huang, K.-K. (2022). Multi-source unsupervised domain adaptation via pseudo target domain. IEEE Transactions on Image Processing, 31:2122–2135.
Saenko, K., Kulis, B., Fritz, M., and Darrell, T. (2010). Adapting visual category models to new domains. In Computer Vision – ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part IV, pages 213–226. Springer.
Saito, K., Ushiku, Y., and Harada, T. (2017). Asymmetric tri-training for unsupervised domain adaptation. In International Conference on Machine Learning, pages 2988–2997. PMLR.
Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723–3732.
Sun, B. and Saenko, K. (2016). Deep CORAL: Correlation alignment for deep domain adaptation. In Computer Vision – ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8–10 and 15–16, 2016, Proceedings, Part III, pages 443–450. Springer.
Sun, T., Lu, C., Zhang, T., and Ling, H. (2022). Safe self-refinement for transformer-based domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7191–7200.
Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. (2021). Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347–10357. PMLR.
Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017). Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7167–7176.
Venkateswara, H., Eusebio, J., Chakraborty, S., and Panchanathan, S. (2017). Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018–5027.
Xie, B., Li, S., Lv, F., Liu, C. H., Wang, G., and Wu, D. (2022). A collaborative alignment framework of transferable knowledge extraction for unsupervised domain adaptation. IEEE Transactions on Knowledge and Data Engineering.
Xu, T., Chen, W., Wang, P., Wang, F., Li, H., and Jin, R. (2021). CDTrans: Cross-domain transformer for unsupervised domain adaptation. arXiv preprint arXiv:2109.06165.
Yang, J., Liu, J., Xu, N., and Huang, J. (2023). TVT: Transferable vision transformer for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 520–530.
Zhang, C. and Lee, G. H. (2022). CA-UDA: Class-aware unsupervised domain adaptation with optimal assignment and pseudo-label refinement. arXiv preprint arXiv:2205.13579.
Zhang, Y., Liu, T., Long, M., and Jordan, M. (2019). Bridging theory and algorithm for domain adaptation. In International Conference on Machine Learning, pages 7404–7413. PMLR.
Zhu, J., Bai, H., and Wang, L. (2023). Patch-mix transformer for unsupervised domain adaptation: A game perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3561–3571.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications