Authors:
Jigyasa Singh Katrolia 1; Lars Krämer 2; Jason Rambach 1; Bruno Mirbach 1 and Didier Stricker 1,2
Affiliations:
1 German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
2 Technische Universität Kaiserslautern, Kaiserslautern, Germany
Keyword(s):
Domain Adaptation, Adversarial Training, Time-of-flight, Synthetic Data, Depth Image, Image Translation.
Abstract:
In the absence of sufficient labeled training data, it is common practice to resort to synthetic data with readily available annotations. However, a performance gap remains between deep learning models trained on synthetic data and those trained on real data. Adversarial-training-based generative models can translate images from the synthetic to the real domain, yielding training data on which models generalize more easily to real-world datasets. The efficiency of this method is limited, however, in the presence of large domain shifts, such as between synthetic and real depth images, which are characterized by sensor- and scene-dependent artifacts. In this paper, we present an adversarial-training-based framework for adapting depth images from the synthetic to the real domain. We combine a cyclic loss with an adversarial loss to bring the two domains closer by translating synthetic images to the real domain, and we demonstrate the usefulness of synthetic images modified in this way for training deep neural networks that perform well on real images. We demonstrate our method on person detection and segmentation in real depth images captured inside a car for in-cabin person monitoring. We also show experimentally how the choice of target-domain image sets, captured with different types of depth sensors, affects this domain adaptation approach.
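The abstract describes combining an adversarial loss (aligning translated synthetic depth images with the real domain) with a cyclic loss (preserving image content through a synthetic-to-real-to-synthetic round trip). A minimal NumPy sketch of how these two terms are typically combined is shown below; it assumes a least-squares adversarial objective (as in LSGAN/CycleGAN) and an L1 cycle-consistency term, and all function names and the weight `lambda_cyc` are illustrative, not taken from the paper.

```python
import numpy as np

def lsgan_adversarial_loss(disc_scores_fake):
    # Generator's adversarial term: push the discriminator's scores on
    # translated (synthetic-to-real) images toward the "real" label 1.
    return np.mean((disc_scores_fake - 1.0) ** 2)

def cycle_consistency_loss(original, reconstructed):
    # L1 distance between a synthetic depth image and its reconstruction
    # after a full synthetic -> real -> synthetic translation cycle.
    return np.mean(np.abs(original - reconstructed))

def total_generator_loss(disc_scores_fake, original, reconstructed,
                         lambda_cyc=10.0):
    # Weighted sum: the adversarial term aligns the domains, while the
    # cyclic term preserves content; lambda_cyc balances the two.
    return (lsgan_adversarial_loss(disc_scores_fake)
            + lambda_cyc * cycle_consistency_loss(original, reconstructed))

# Toy example on a 2x2 "depth image":
synthetic = np.array([[1.0, 2.0], [3.0, 4.0]])
cycled    = np.array([[1.1, 2.0], [3.0, 3.8]])   # after the round trip
scores    = np.array([0.8, 0.9])  # discriminator outputs on translated images

loss = total_generator_loss(scores, synthetic, cycled)
# adversarial: mean(0.04, 0.01) = 0.025; cyclic: mean(0.1, 0, 0, 0.2) = 0.075
# total: 0.025 + 10.0 * 0.075 = 0.775
```

In practice the two translation networks and their discriminators are trained jointly, with the cyclic term computed in both directions; the sketch shows only the scalar loss arithmetic for one direction.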