Authors:
Annika Mütze 1; Matthias Rottmann 1,2 and Hanno Gottschalk 1
Affiliations:
1 IZMD & School of Mathematics and Natural Sciences, University of Wuppertal, Wuppertal, Germany
2 School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
Keyword(s):
Domain Adaptation, Image-to-Image Translation, Generative Adversarial Networks, Semantic Segmentation, Semi-Supervised Learning, Real2Sim.
Abstract:
Domain adaptation is of great interest because labeling is an expensive and error-prone task, especially at the pixel level, as in semantic segmentation. One would therefore like to train neural networks on synthetic domains, where data is abundant. However, such models often perform poorly on out-of-domain images. Image-to-image approaches can bridge domains at the input level. Nevertheless, standard image-to-image approaches focus on visual fidelity rather than on the downstream task. We therefore propose a “task aware” generative adversarial network within an image-to-image domain adaptation approach. Assisted by a small amount of labeled data, we guide the image-to-image translation toward an input more suitable for a semantic segmentation network trained on synthetic data. This constitutes a modular semi-supervised domain adaptation method for semantic segmentation based on CycleGAN, in which we refrain from adapting the semantic segmentation expert. Our experiments comprise evaluations on complex domain adaptation tasks and refined domain gap analyses using from-scratch-trained networks. We demonstrate that our method outperforms CycleGAN by 7 percentage points in image classification accuracy using only 70 (10%) labeled images. For semantic segmentation, we show an improvement of up to 12.5 percentage points in mean intersection over union on Cityscapes using up to 148 labeled images.
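To make the idea concrete, the following is a minimal PyTorch sketch of how a task loss from a frozen, synthetically trained segmentation expert could be attached to a CycleGAN generator objective. The names G_real2syn, D_syn, seg_net and the weight lambda_task are illustrative assumptions rather than the authors' implementation, and CycleGAN's cycle-consistency and identity losses are omitted for brevity.

    import torch
    import torch.nn.functional as F

    def task_aware_generator_loss(G_real2syn, D_syn, seg_net,
                                  real_img, real_label, lambda_task=1.0):
        # Translate a real image into the synthetic style (Real2Sim direction).
        fake_syn = G_real2syn(real_img)

        # Standard least-squares adversarial term, as used in CycleGAN (LSGAN).
        d_out = D_syn(fake_syn)
        adv_loss = F.mse_loss(d_out, torch.ones_like(d_out))

        # Task term: the frozen segmentation expert, trained on synthetic data,
        # should segment the translated image correctly. The expert itself is
        # not adapted; gradients flow only into the generator.
        for p in seg_net.parameters():
            p.requires_grad_(False)
        task_loss = F.cross_entropy(seg_net(fake_syn), real_label)

        return adv_loss + lambda_task * task_loss

In this sketch, the task term is computed only on the few labeled real images, which is what makes the scheme semi-supervised: the adversarial and cycle losses still train on all unlabeled data.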