Authors:
Julia Böhlke 1; Dimitri Korsch 1; Paul Bodesheim 1 and Joachim Denzler 1,2,3
Affiliations:
1 Computer Vision Group, Friedrich Schiller University Jena, Ernst-Abbe-Platz 2, Jena, Germany
2 Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), Institute for Data Science (IDW), Mälzerstraße 3, Jena, Germany
3 Michael Stifel Center Jena for Data-Driven and Simulation Science, Ernst-Abbe-Platz 2, Jena, Germany
Keyword(s):
Noisy Web Data, Label Noise Filtering, Fine-grained Categorization, Duplicate Detection.
Abstract:
Despite the availability of huge annotated benchmark datasets and the potential of transfer learning, i.e., fine-tuning a pre-trained neural network to a specific task, deep learning struggles in applications where no labeled datasets of sufficient size exist. This issue affects fine-grained recognition tasks the most, since correct image annotations are expensive and require expert knowledge. Nevertheless, the Internet offers a lot of weakly annotated images. In contrast to existing work, we suggest a new lightweight filtering strategy that exploits this source of information without supervision and at minimal additional cost. Our main contributions are specific filter operations that select which downloaded images should augment a training set. We filter test duplicates, to avoid a biased evaluation of the methods, as well as two types of label noise: cross-domain noise, i.e., images outside any class in the dataset, and cross-class noise, a form of label-swapping noise. We evaluate our suggested filter operations in a controlled environment and demonstrate our methods' effectiveness with two small annotated seed datasets for moth species recognition. While noisy web images consistently improve classification accuracies, our filtering methods retain only a fraction of the data, such that high accuracies are achieved with a significantly smaller training dataset.
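The abstract does not specify how the test-duplicate filter is implemented. As a hedged illustration only, one common lightweight approach to near-duplicate detection between downloaded web images and a test set is an average hash over downscaled grayscale images, compared by Hamming distance; the function names and threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Downscale a grayscale image to size x size via block means,
    then threshold each block at the global mean to get a binary hash."""
    h, w = gray.shape
    # crop so the image divides evenly into size x size blocks
    gray = gray[: h - h % size, : w - w % size]
    bh, bw = gray.shape[0] // size, gray.shape[1] // size
    blocks = gray.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def is_near_duplicate(a: np.ndarray, b: np.ndarray, max_hamming: int = 5) -> bool:
    """Two images count as near-duplicates if their hashes differ
    in at most max_hamming bit positions (threshold is illustrative)."""
    return int(np.sum(average_hash(a) != average_hash(b))) <= max_hamming

# toy example: a horizontal gradient vs. a slightly brightened copy
img = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
assert is_near_duplicate(img, img + 3.0)        # brightness shift survives
assert not is_near_duplicate(img, img[:, ::-1])  # mirrored image differs
```

Because the hash thresholds at the image's own mean, global brightness shifts leave it unchanged, which is why the brightened copy is still flagged as a duplicate while the mirrored image is not.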