
tions to follow when performing fine-tuning for diffusion model-assisted data augmentation and how to combine it with state-of-the-art DA models. We observe that we can generate additional synthetic data that captures the target domain for each class and improves model accuracy over the non-augmented counterparts. Our approach is model-agnostic and easy to implement, converting the few-shot problem into a standard DA problem. While not every DA model (e.g., UAN) benefits from this approach, other methods (e.g., NUDA, CDAN) do show an improvement in their average and per-class accuracy, showcasing the prospective application of this technique to real-world scenarios. To our knowledge, this is the first work to consider combining these methods to address multi-class classification.
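As a rough illustration of this conversion (a sketch of ours, not the authors' released code), the snippet below uses the Hugging Face diffusers API to sample synthetic target-domain images from per-class fine-tuned diffusion checkpoints and write them to disk; the checkpoint paths, class names, prompt template, and sample count are placeholder assumptions. The generated folder can then simply be merged with the few real target shots so that any standard DA method (e.g., CDAN) can be trained without few-shot-specific machinery.

# Minimal sketch: convert a few-shot DA problem into a standard DA problem
# by generating synthetic target-domain images with per-class fine-tuned
# diffusion checkpoints. Paths, class names, and the prompt are hypothetical.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

CLASSES = ["class_a", "class_b"]      # target-domain classes (placeholder names)
SAMPLES_PER_CLASS = 50                # synthetic images to add per class
OUT_DIR = Path("synthetic_target")

device = "cuda" if torch.cuda.is_available() else "cpu"

for cls in CLASSES:
    # Assumes a diffusion model already fine-tuned (e.g., DreamBooth-style)
    # on the few real target-domain shots of this class, saved in diffusers format.
    ckpt = f"checkpoints/{cls}-finetuned"   # hypothetical checkpoint path
    pipe = StableDiffusionPipeline.from_pretrained(ckpt).to(device)

    cls_dir = OUT_DIR / cls
    cls_dir.mkdir(parents=True, exist_ok=True)

    prompt = f"a photo of a {cls} in the target domain"   # hypothetical prompt template
    for i in range(SAMPLES_PER_CLASS):
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(cls_dir / f"{cls}_{i:04d}.png")

# The synthetic images are merged with the labeled source data and the few
# real target shots before running an off-the-shelf DA training pipeline.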
Future work could consider the open-set and partial open-set cases for few-shot scenarios, to study how potentially low-quality synthetic samples affect accuracy in the presence of unknown classes, as well as a more exhaustive study of the trade-off between the single-model and triple-model fine-tuning strategies.
ACKNOWLEDGEMENTS
This research was financed through a fellowship from the Japan International Cooperation Agency (JICA), as part of its Japan-Mexico Training Program for the Strategic Global Partnership 2023. We would also like to thank the Graduate School of Information and Engineering at Ritsumeikan University for its kind support.
REFERENCES
Bashkirova, D., Mishra, S., Lteif, D., Teterwak, P., Kim, D.,
Alladkani, F., Akl, J., Calli, B., Bargal, S. A., Saenko,
K., Kim, D., Seo, M., Jeon, Y., Choi, D.-G., Ettedgui,
S., Giryes, R., Abu-Hussein, S., Xie, B., and Li, S.
(2023). VisDA 2022 challenge: Domain adaptation for industrial waste sorting.
Benigmim, Y., Roy, S., Essid, S., Kalogeiton, V., and Lathuilière, S. (2023). One-shot unsupervised domain adaptation with personalized diffusion models. arXiv preprint arXiv:2303.18080.
Cao, Z., Ma, L., Long, M., and Wang, J. (2018). Partial
adversarial domain adaptation. In Proceedings of the
European Conference on Computer Vision (ECCV).
Cao, Z., You, K., Long, M., Wang, J., and Yang, Q. (2019).
Learning to transfer examples for partial domain adap-
tation. In 2019 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), pages 2980–
2989.
Gal, R., Alaluf, Y., Atzmon, Y., Patashnik, O., Bermano,
A. H., Chechik, G., and Cohen-Or, D. (2022). An
image is worth one word: Personalizing text-to-image
generation using textual inversion.
Ganin, Y. and Lempitsky, V. (2015). Unsupervised do-
main adaptation by backpropagation. In Bach, F. and
Blei, D., editors, Proceedings of the 32nd Interna-
tional Conference on Machine Learning, volume 37
of Proceedings of Machine Learning Research, pages
1180–1189, Lille, France. PMLR.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2016). Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096–2030.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative adversarial nets. In Ghahra-
mani, Z., Welling, M., Cortes, C., Lawrence, N., and
Weinberger, K., editors, Advances in Neural Infor-
mation Processing Systems, volume 27. Curran Asso-
ciates, Inc.
Li, J., Chen, E., Ding, Z., Zhu, L., Lu, K., and Shen,
H. T. (2021). Maximum density divergence for do-
main adaptation. IEEE Transactions on Pattern Anal-
ysis and Machine Intelligence, 43(11):3918–3930.
Liu, X., Yoo, C., Xing, F., Oh, H., Fakhri, G., Kang, J.-W.,
and Woo, J. (2022). Deep unsupervised domain adap-
tation: A review of recent advances and perspectives.
APSIPA Transactions on Signal and Information Pro-
cessing.
Long, M., Cao, Y., Wang, J., and Jordan, M. (2015). Learn-
ing transferable features with deep adaptation net-
works. In Bach, F. and Blei, D., editors, Proceed-
ings of the 32nd International Conference on Ma-
chine Learning, volume 37 of Proceedings of Machine
Learning Research, pages 97–105, Lille, France.
PMLR.
Long, M., Cao, Z., Wang, J., and Jordan, M. I. (2018).
Conditional adversarial domain adaptation. In Bengio,
S., Wallach, H., Larochelle, H., Grauman, K., Cesa-
Bianchi, N., and Garnett, R., editors, Advances in
Neural Information Processing Systems, volume 31.
Curran Associates, Inc.
Motiian, S., Jones, Q., Iranmanesh, S. M., and Doretto,
G. (2017). Few-shot adversarial domain adaptation.
In Proceedings of the 31st International Conference
on Neural Information Processing Systems, NIPS’17,
pages 6673–6683, Red Hook, NY, USA. Curran Associates Inc.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Om-
mer, B. (2021). High-resolution image synthesis with
latent diffusion models.
Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., and
Aberman, K. (2022). Dreambooth: Fine tuning text-
to-image diffusion models for subject-driven genera-
tion.
Saito, K., Kim, D., Sclaroff, S., and Saenko, K. (2020). Universal domain adaptation through self supervision. In Advances in Neural Information Processing Systems, volume 33. Curran Associates, Inc.