
Figure 5: Evaluation of the performance of classifying samples as noisy vs. clean for varying k. The reported metric is the F1-score.
Our approach can be easily plugged into any training pipeline as a step preceding loss computation and, optionally, triplet mining, without requiring architectural changes or complex training schemes. Evaluation across a range of experimental settings demonstrated that our approach performs better than, or on par with, competing methods, without the hard requirement of knowing the dataset's noise rate.
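As a minimal sketch of how such a step could sit in front of loss computation, the following flags each batch sample as clean when the majority of its k nearest neighbors in embedding space share its label. The function name, the cosine-similarity neighborhood, and the majority threshold are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def knn_clean_mask(embeddings, labels, k=5):
    """Hypothetical kNN-based denoising step: flag a sample as clean
    if the majority of its k nearest neighbors (by cosine similarity)
    carry the same label. `k` and the 0.5 threshold are assumptions."""
    # Normalize rows so that dot products equal cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)          # exclude each sample itself
    nn_idx = np.argsort(-sims, axis=1)[:, :k]  # k most similar per row
    neighbor_labels = labels[nn_idx]           # shape (N, k)
    agree = (neighbor_labels == labels[:, None]).mean(axis=1)
    return agree > 0.5                         # clean iff majority agrees

# Usage: filter the batch before loss computation / triplet mining, e.g.
#   clean = knn_clean_mask(batch_embeddings, batch_labels, k=5)
#   loss = criterion(batch_embeddings[clean], batch_labels[clean])
```

Because the mask is computed per batch from the current embeddings, no architectural change is needed; the filter simply precedes whichever loss or miner the pipeline already uses.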
In future work we will examine more modern model architectures and other types of noise. We will also investigate re-labeling approaches, so that all existing samples can be exploited rather than discarded, as well as noise elimination at the level of the dataset rather than at the level of the batch.
ACKNOWLEDGEMENTS
This work was implemented under the project Craeft
which received funding from the European Union’s
Horizon Europe research and innovation program un-
der grant agreement No 101094349 and was sup-
ported by the Hellenic Foundation for Research and
Innovation (HFRI) under the “1st Call for H.F.R.I Re-
search Projects to support Faculty members and Re-
searchers and the procurement of high-cost research
equipment”, project I.C.Humans, number 91.