An Adversarial Training based Framework for Depth Domain Adaptation

Authors: Jigyasa Singh Katrolia 1; Lars Krämer 2; Jason Rambach 1; Bruno Mirbach 1 and Didier Stricker 1,2

Affiliations: 1 German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; 2 Technische Universität Kaiserslautern, Kaiserslautern, Germany

Keyword(s): Domain Adaptation, Adversarial Training, Time-of-flight, Synthetic Data, Depth Image, Image Translation.

Abstract: In the absence of sufficient labeled training data, it is common practice to resort to synthetic data with readily available annotations. However, a performance gap still exists between deep learning models trained on synthetic data and those trained on real data. Using adversarial-training-based generative models, it is possible to translate images from the synthetic to the real domain and train on them models that generalize well to real-world datasets, but the efficiency of this method is limited in the presence of large domain shifts, such as between synthetic and real depth images, which are characterized by depth-sensor- and scene-dependent artifacts. In this paper, we present an adversarial-training-based framework for adapting depth images from the synthetic to the real domain. We use a cyclic loss together with an adversarial loss to bring the two domains of synthetic and real depth images closer by translating synthetic images to the real domain, and we demonstrate the usefulness of synthetic images modified this way for training deep neural networks that perform well on real images. We demonstrate our method for the application of person detection and segmentation in real depth images captured in a car for in-cabin person monitoring. We also show through experiments the effect of using target-domain image sets captured with different types of depth sensors on this domain adaptation approach.
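The abstract describes combining an adversarial loss with a cyclic (cycle-consistency) loss to translate synthetic depth images into the real domain. The sketch below illustrates, in a minimal CycleGAN-style form, how such a combined generator objective is typically assembled; the toy networks, the least-squares GAN loss, and the weight lambda_cyc are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the loss combination described in the abstract: an adversarial
# loss plus a cycle-consistency loss for translating synthetic depth images to the
# real domain. Architectures, losses, and lambda_cyc are assumed for illustration.
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy encoder-decoder for 1-channel depth images (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SmallDiscriminator(nn.Module):
    """Toy patch-style discriminator scoring whether a depth image looks 'real'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Generators for both translation directions and discriminators for both domains.
G_syn2real, G_real2syn = SmallGenerator(), SmallGenerator()
D_real, D_syn = SmallDiscriminator(), SmallDiscriminator()

gan_loss = nn.MSELoss()      # least-squares GAN objective (assumed)
cycle_loss = nn.L1Loss()     # cycle-consistency penalty
lambda_cyc = 10.0            # assumed weighting of the cyclic term

def generator_step(depth_syn, depth_real):
    """One generator update: adversarial loss plus cyclic loss in both directions."""
    fake_real = G_syn2real(depth_syn)          # synthetic -> real
    fake_syn = G_real2syn(depth_real)          # real -> synthetic

    # Adversarial terms: generated images should fool the target-domain discriminators.
    pred_fake_real = D_real(fake_real)
    pred_fake_syn = D_syn(fake_syn)
    adv = gan_loss(pred_fake_real, torch.ones_like(pred_fake_real)) + \
          gan_loss(pred_fake_syn, torch.ones_like(pred_fake_syn))

    # Cyclic terms: translating back should reconstruct the original depth image.
    cyc = cycle_loss(G_real2syn(fake_real), depth_syn) + \
          cycle_loss(G_syn2real(fake_syn), depth_real)

    return adv + lambda_cyc * cyc

# Example forward/backward pass on random 1-channel "depth" tensors.
loss = generator_step(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
loss.backward()

In a full training loop the discriminators are updated in an alternating step to separate real from translated depth images, and the translated synthetic images, which keep their original annotations, can then serve as training data for the downstream person detection and segmentation networks.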

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Katrolia, J.; Krämer, L.; Rambach, J.; Mirbach, B. and Stricker, D. (2021). An Adversarial Training based Framework for Depth Domain Adaptation. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP; ISBN 978-989-758-488-6; ISSN 2184-4321, SciTePress, pages 353-361. DOI: 10.5220/0010252403530361

@conference{visapp21,
author={Jigyasa Singh Katrolia and Lars Krämer and Jason Rambach and Bruno Mirbach and Didier Stricker},
title={An Adversarial Training based Framework for Depth Domain Adaptation},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP},
year={2021},
pages={353-361},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010252403530361},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP
TI - An Adversarial Training based Framework for Depth Domain Adaptation
SN - 978-989-758-488-6
IS - 2184-4321
AU - Katrolia, J.
AU - Krämer, L.
AU - Rambach, J.
AU - Mirbach, B.
AU - Stricker, D.
PY - 2021
SP - 353
EP - 361
DO - 10.5220/0010252403530361
PB - SciTePress
ER -