
to other domain-specific classification tasks, cross-domain experiments, and diverse dataset characteristics will be important for generalizing these results.
Future research should extend beyond the current scope by exploring several promising avenues. First, evaluating Vision Transformer (ViT) models or fine-tuning large pretrained models (Abou Baker et al., 2024) would provide insight into how newer architectural paradigms perform in transfer learning scenarios. Second, developing more advanced hyperparameter optimization techniques could further refine model selection strategies. Third, expanding the diversity of datasets to include more domain-specific and cross-domain challenges would test the generalizability of our findings. In addition, exploring the interaction between transferability metrics and emerging techniques such as few-shot learning could provide new approaches for efficient adaptation of machine learning models.
In conclusion, effective transferability metrics must balance speed and accuracy to identify appropriate pretrained models without extensive fine-tuning. This research contributes to a deeper understanding of transferability in deep learning, providing a foundation for broader evaluations and practical guidance in waste classification and beyond.
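To make the speed side of this trade-off concrete: the LEEP score (Nguyen et al., 2020), one of the transferability metrics discussed above, needs only a single forward pass of the pretrained model over the target data. The sketch below is a minimal NumPy implementation under our own conventions (the function name `leep_score` and the array layout are illustrative, not taken from the original paper's code):

```python
import numpy as np

def leep_score(probs, labels):
    """Log Expected Empirical Prediction (LEEP) transferability score
    (Nguyen et al., 2020). Higher (closer to 0) suggests easier transfer.

    probs:  (n, z) array of the pretrained source model's softmax outputs
            on the n target samples (z source classes).
    labels: (n,) array of integer target labels in [0, y).
    """
    n = probs.shape[0]
    num_target = int(labels.max()) + 1
    # Empirical joint distribution P(y, z) over target labels and source classes.
    joint = np.stack(
        [probs[labels == c].sum(axis=0) for c in range(num_target)]
    ) / n
    # Conditional P(y | z); clip guards source classes with near-zero mass.
    cond = joint / np.clip(joint.sum(axis=0, keepdims=True), 1e-12, None)
    # Likelihood of each true target label under the "dummy" classifier
    # sum_z P(y | z) * theta(x)_z, then average the log-likelihood.
    lik = np.sum(probs * cond[labels], axis=1)
    return float(np.mean(np.log(np.clip(lik, 1e-12, None))))
```

Because the score is an average log-likelihood, it is at most 0, and source models whose predictions align cleanly with the target labels score closer to 0.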
ACKNOWLEDGEMENTS
This work has been funded by the Ministry of Economy, Innovation, Digitization, and Energy of the State of North Rhine-Westphalia, Germany, within the project Digital.Zirkulär.Ruhr.
REFERENCES
Abou Baker, N. and Handmann, U. (2024). One size does not fit all in evaluating model selection scores for image classification. Scientific Reports, 14(1):30239.
Abou Baker, N., Rohrschneider, D., and Handmann, U. (2024). Parameter-efficient fine-tuning of large pretrained models for instance segmentation tasks. Machine Learning and Knowledge Extraction, 6(4):2783–2807.
Abou Baker, N., Stehr, J., and Handmann, U. (2023). E-waste recycling gets smarter with digitalization. In 2023 IEEE Conference on Technologies for Sustainability (SusTech), pages 205–209.
Abou Baker, N., Zengeler, N., and Handmann, U. (2022). A transfer learning evaluation of deep neural networks for image classification. Machine Learning and Knowledge Extraction, 4(1):22–41.
Achille, A., Lam, M., Tewari, R., Ravichandran, A., Maji, S., Fowlkes, C., Soatto, S., and Perona, P. (2019). Task2vec: Task embedding for meta-learning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
Agostinelli, A., Pándy, M., Uijlings, J., Mensink, T., and Ferrari, V. (2022). How stable are transferability metrics evaluations? In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIV, pages 303–321, Berlin, Heidelberg. Springer-Verlag.
Bolya, D., Mittapalli, R., and Hoffman, J. (2021). Scalable diverse model selection for accessible transfer learning. In Neural Information Processing Systems.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255.
GarbageFine (2023). Garbage dataset. https://www.kaggle.com/datasets/mrk1903/garbage. Accessed: 2024-09-27.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
Huang, G., Liu, Z., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269.
Huang, L.-K., Wei, Y., Rong, Y., Yang, Q., and Huang, J.
(2021). Frustratingly easy transferability estimation.
In International Conference on Machine Learning.
Kornblith, S., Shlens, J., and Le, Q. V. (2019). Do better imagenet models transfer better? In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2656–2666.
Nguyen, C. V., Hassner, T., Archambeau, C., and Seeger, M. W. (2020). Leep: A new measure to evaluate transferability of learned representations. In International Conference on Machine Learning.
Pándy, M., Agostinelli, A., Uijlings, J. R. R., Ferrari, V., and Mensink, T. (2022). Transferability estimation using Bhattacharyya class separability. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9162–9172.
Renggli, C., Pinto, A. S., Rimanic, L., Puigcerver, J., Riquelme, C., Zhang, C., and Lucic, M. (2022). Which model to transfer? Finding the needle in the growing haystack. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9195–9204.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4510–4520.
Statista (2023). Global waste generation: statistics and facts. https://www.statista.com/topics/4983/waste-generation-worldwide/topicOverview. Accessed: 2024-09-27.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9.
Rethinking Model Selection Beyond ImageNet Accuracy for Waste Classification