
Hard, A., Rao, K., Mathews, R., Beaufays, F., Augenstein, S., Eichner, H., Kiddon, C., and Ramage, D. (2018). Federated learning for mobile keyboard prediction. ArXiv, abs/1811.03604.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. IEEE.
Hu, S. X., Li, D., Stühmer, J., Kim, M., and Hospedales, T. M. (2022). Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 18–24, 2022, pages 9058–9067. IEEE.
Huang, W., Ye, M., Shi, Z., Li, H., and Du, B. (2023). Rethinking federated learning with domain shift: A prototype view. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16312–16322, Los Alamitos, CA, USA. IEEE Computer Society.
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Nitin Bhagoji, A., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., D'Oliveira, R. G. L., Eichner, H., El Rouayheb, S., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., Gruteser, M., Harchaoui, Z., He, C., He, L., Huo, Z., Hutchinson, B., Hsu, J., Jaggi, M., Javidi, T., Joshi, G., Khodak, M., Konečný, J., Korolova, A., Koushanfar, F., Koyejo, S., Lepoint, T., Liu, Y., Mittal, P., Mohri, M., Nock, R., Özgür, A., Pagh, R., Qi, H., Ramage, D., Raskar, R., Raykova, M., Song, D., Song, W., Stich, S. U., Sun, Z., Suresh, A. T., Tramèr, F., Vepakomma, P., Wang, J., Xiong, L., Xu, Z., Yang, Q., Yu, F. X., Yu, H., and Zhao, S. (2021). Advances and open problems in federated learning. Found. Trends Mach. Learn., 14(1–2):1–210.
Kim, Y., Cho, D., Han, K., Panda, P., and Hong, S. (2021). Domain adaptation without source data. IEEE Transactions on Artificial Intelligence, 2(6):508–518.
Kim, Y., Oh, J., Kim, S., and Yun, S.-Y. (2022). How to fine-tune models with few samples: Update, data augmentation, and test-time augmentation.
Koh, P. W., Sagawa, S., Marklund, H., Xie, S. M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R. L., Gao, I., Lee, T., David, E., Stavness, I., Guo, W., Earnshaw, B. A., Haque, I. S., Beery, S., Leskovec, J., Kundaje, A., Pierson, E., Levine, S., Finn, C., and Liang, P. (2021). WILDS: A benchmark of in-the-Wild distribution shifts. In Meila, M. and Zhang, T., editors, International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, pages 5637–5664. PMLR.
Kurmi, V. K., Subramanian, V. K., and Namboodiri, V. P. (2021). Domain impression: A source data free domain adaptation method. CoRR, abs/2102.09003:615–625.
Laier, N. and Laier, J. (2023). WeSort.AI homepage. https://www.wesort.ai/. Accessed: 2023-10-24.
Lange, J.-P. (2021). Managing plastic waste – sorting, recycling, disposal, and product redesign. ACS Sustainable Chemistry & Engineering, 9(47):15722–15738.
Li, W., Liu, X., and Bilen, H. (2022). Cross-domain few-shot learning with task-specific adapters. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7151–7160, Los Alamitos, CA, USA. IEEE Computer Society.
Liang, J., Hu, D., and Feng, J. (2020). Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, ICML'20, pages 6028–6039. JMLR.org.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and Arcas, B. A. y. (2017). Communication-efficient learning of deep networks from decentralized data. In Singh, A. and Zhu, J., editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1273–1282. PMLR.
Müller, R., Kornblith, S., and Hinton, G. (2019). When does label smoothing help? In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. A., and Garnett, R., editors, Neural Information Processing Systems, pages 4696–4705. Curran Associates Inc., Red Hook, NY, USA.
Nado, Z., Padhy, S., Sculley, D., D'Amour, A., Lakshminarayanan, B., and Snoek, J. (2021). Evaluating prediction-time batch normalization for robustness under covariate shift.
Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., and Wang, B. (2019). Moment matching for multi-source domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1406–1415. IEEE.
Perez, L. and Wang, J. (2017). The effectiveness of data augmentation in image classification using deep learning.
Raab, C., Röder, M., and Schleif, F.-M. (2022). Domain adversarial tangent subspace alignment for explainable domain adaptation. Neurocomputing, 506:418–429.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252.
Saenko, K., Kulis, B., Fritz, M., and Darrell, T. (2010). Adapting visual category models to new domains. In Daniilidis, K., Maragos, P., and Paragios, N., editors, Computer Vision – ECCV 2010, volume 6314 of Lecture Notes in Computer Science, pages 213–226, Berlin, Heidelberg. Springer Berlin Heidelberg.
Siqueira, F. and Davis, J. G. (2021). Service computing for Industry 4.0: State of the art, challenges, and research opportunities. ACM Computing Surveys, 54(9):1–38.
Snell, J., Swersky, K., and Zemel, R. (2017). Prototypical networks for few-shot learning. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Neural Information Processing Systems, pages 4077–4087. Curran Associates Inc.