Authors:
Warren Jouanneau 1,2; Aurélie Bugeau 2,3; Marc Palyart 1; Nicolas Papadakis 4 and Laurent Vézard 1
Affiliations:
1 Lectra, F-33610 Cestas, France
2 Univ. Bordeaux, Bordeaux INP, CNRS, LaBRI, UMR 5800, F-33400 Talence, France
3 Institut Universitaire de France (IUF), France
4 Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251, F-33400 Talence, France
Keyword(s):
Partial, Unlabeled Learning, Patch-Based Method, Classification.
Abstract:
Supervised methods rely on correctly curated and annotated datasets. However, data annotation can be a cumbersome step requiring costly hand labeling. In this paper, we tackle multi-label classification problems where only a single positive label is available for each image in the dataset. This weakly supervised setting aims to simplify dataset assembly by collecting only positive image examples for each label, without further annotation refinement. Our contributions are twofold. First, we introduce a lightweight patch-based architecture relying on the attention mechanism. Second, leveraging patch embedding self-similarities, we propose a novel strategy for estimating negative examples and dealing with positive and unlabeled learning problems. Experiments demonstrate that our architecture can be trained from scratch, whereas pre-training on similar databases is required for related methods from the literature.
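To make the abstract's idea of an attention-based patch architecture more concrete, the following is a minimal sketch of what such a model could look like: patches are embedded, pooled with learned attention weights, and mapped to multi-label logits. It is an illustrative assumption, not the authors' actual architecture; the module names, patch size, and dimensions (PatchAttentionClassifier, patch_size=16, embed_dim=128) are hypothetical.

```python
# Hypothetical sketch (not the paper's code): a minimal patch-based
# multi-label classifier with attention pooling over patch embeddings.
import torch
import torch.nn as nn


class PatchAttentionClassifier(nn.Module):
    """Embed image patches, pool them with attention, predict label logits."""

    def __init__(self, patch_size=16, embed_dim=128, num_labels=20):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.patch_embed = nn.Conv2d(3, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # One attention score per patch (illustrative attention pooling).
        self.attn = nn.Sequential(nn.Linear(embed_dim, 64), nn.Tanh(),
                                  nn.Linear(64, 1))
        self.head = nn.Linear(embed_dim, num_labels)

    def forward(self, x):
        # x: (B, 3, H, W) -> patch embeddings (B, N, D)
        z = self.patch_embed(x).flatten(2).transpose(1, 2)
        # Softmax-normalized attention weights over the N patches.
        a = torch.softmax(self.attn(z), dim=1)   # (B, N, 1)
        pooled = (a * z).sum(dim=1)              # (B, D)
        # Multi-label logits; sigmoid + BCE would be applied during training.
        return self.head(pooled)


if __name__ == "__main__":
    model = PatchAttentionClassifier()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 20])
```

In a single-positive-label setting, only one label per image would be known to be positive; the negative-example estimation from patch embedding self-similarities described in the abstract would supply the remaining supervision, and is not reproduced in this sketch.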