Let’s Get the FACS Straight: Reconstructing Obstructed Facial Features

Authors: Tim Büchner 1 ; Sven Sickert 1 ; Gerd Volk 2 ; Christoph Anders 3 ; Orlando Guntinas-Lichius 2 and Joachim Denzler 1

Affiliations: 1 Computer Vision Group, Friedrich Schiller University Jena, Jena, Germany ; 2 Department of Otolaryngology, University Hospital Jena, Jena, Germany ; 3 Division of Motor Research, Pathophysiology and Biomechanics, Clinic for Trauma, Hand and Reconstructive Surgery, University Hospital Jena, Jena, Germany

Keyword(s): Faces, Reconstruction, sEMG, Cycle-GAN, Facial Action Coding System, Emotions.

Abstract: The human face is one of the most crucial parts of interhuman communication. Even when parts of the face are hidden or obstructed, the underlying facial movements can be understood. Machine learning approaches often fail in that regard due to the complexity of the facial structures. A common way to alleviate this problem is to fine-tune a model for such a specific application. However, this is computationally intensive and might have to be repeated for each desired analysis task. In this paper, we propose to reconstruct obstructed facial parts to avoid repeated fine-tuning. As a result, existing facial analysis methods can be used without further changes with respect to the data. In our approach, the restoration of facial features is interpreted as a style transfer task between different recording setups. By using the CycleGAN architecture, the requirement of matched pairs, which is often hard to fulfill, can be eliminated. To prove the viability of our approach, we compare our reconstructions with real unobstructed recordings. We created a novel data set in which 36 test subjects were recorded both with and without 62 surface electromyography sensors attached to their faces. In our evaluation, we feature typical facial analysis tasks, such as the computation of Facial Action Units and the detection of emotions. To further assess the quality of the restoration, we also compare perceptual distances. We show that scores similar to those of videos without obstructing sensors can be achieved.
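The abstract frames restoration as unpaired style transfer with CycleGAN, whose key ingredient is the cycle-consistency loss: mapping an image to the other domain and back should recover the original, so no matched image pairs are needed. The following is a minimal, hypothetical sketch of that loss (not the authors' implementation); the real generators are convolutional networks, while here stand-in scalar maps keep the example self-contained.

```python
# Toy illustration of the cycle-consistency idea behind CycleGAN
# (a hypothetical sketch, not the paper's implementation).
# Domain X: faces with attached sEMG sensors; domain Y: unobstructed faces.
# G: X -> Y "removes" sensors, F: Y -> X "adds" them. Real generators are
# CNNs; here they are stand-in linear maps so the example stays runnable.

def G(x):           # stand-in generator X -> Y
    return [2.0 * v for v in x]

def F(y):           # stand-in generator Y -> X
    return [0.5 * v for v in y]

def l1(a, b):
    """Mean absolute difference between two equally sized vectors."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1.

    Penalizes generator pairs that do not invert each other; this is
    what allows training on unpaired images from the two domains.
    """
    return l1(F(G(x)), x) + l1(G(F(y)), y)

x = [0.3, -1.2, 0.7, 2.5]   # toy "image" from domain X (with sensors)
y = [1.1, 0.4, -0.8, 0.0]   # toy "image" from domain Y (clean)

# These stand-in generators are exact inverses, so the cycle loss is zero.
print(cycle_consistency_loss(x, y))  # -> 0.0
```

In training, this loss is added to the usual adversarial losses of both generator/discriminator pairs, anchoring the translation so a sensor-covered face maps back to the same identity rather than to an arbitrary clean face.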

CC BY-NC-ND 4.0


Paper citation in several formats:
Büchner, T.; Sickert, S.; Volk, G.; Anders, C.; Guntinas-Lichius, O. and Denzler, J. (2023). Let’s Get the FACS Straight: Reconstructing Obstructed Facial Features. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 727-736. DOI: 10.5220/0011619900003417

@conference{visapp23,
author={Tim Büchner and Sven Sickert and Gerd Volk and Christoph Anders and Orlando Guntinas{-}Lichius and Joachim Denzler},
title={Let’s Get the FACS Straight: Reconstructing Obstructed Facial Features},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={727-736},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011619900003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Let’s Get the FACS Straight: Reconstructing Obstructed Facial Features
SN - 978-989-758-634-7
IS - 2184-4321
AU - Büchner, T.
AU - Sickert, S.
AU - Volk, G.
AU - Anders, C.
AU - Guntinas-Lichius, O.
AU - Denzler, J.
PY - 2023
SP - 727
EP - 736
DO - 10.5220/0011619900003417
PB - SciTePress