Authors:
Robin Horst 1; Sebastian Alberternst 2; Jan Sutter 2; Philipp Slusallek 2; Uwe Kloos 3 and Ralf Dörner 4
Affiliations:
1 German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany; Reutlingen University of Applied Sciences, Reutlingen, Germany; RheinMain University of Applied Sciences, Wiesbaden, Germany
2 German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
3 Reutlingen University of Applied Sciences, Reutlingen, Germany
4 RheinMain University of Applied Sciences, Wiesbaden, Germany
Keyword(s):
Mixed Reality, Video Avatar, Multi-user Environments, Low-cost, Computer-supported Cooperative Work, Image Segmentation.
Related Ontology Subjects/Areas/Topics:
Augmented, Mixed and Virtual Environments; Computer Vision, Visualization and Computer Graphics; Distributed Augmented, Mixed and Virtual Reality; Interactive Environments
Abstract:
Representing users within an immersive virtual environment is an essential functionality of a multi-person virtual reality system. Especially when communicative or collaborative tasks must be performed, realistically embodying and integrating such avatar representations poses challenges. A shared understanding of the local space and non-verbal communication (such as gestures, posture, or self-expressive cues) can support these tasks. In this paper, we introduce a novel approach to create realistic, video-texture-based avatars of co-located users in real time and to integrate them into an immersive virtual environment. We present a straightforward, low-cost hardware and software solution for doing so. We discuss technical design problems that arose during implementation and report a qualitative analysis of the concept's usability from a user study that applies it to a training scenario in the automotive sector.
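
Note: the abstract names image segmentation and video-texture avatars without giving implementation detail. The following is only a minimal, much-simplified sketch of the general idea (segmenting a user from a camera feed and turning the result into an alpha-masked texture for an avatar billboard); it is not the authors' pipeline, and the camera index, OpenCV background-subtraction method, and parameter values are illustrative assumptions.

    # Illustrative sketch only: obtain a video texture of a user with a transparent
    # background via OpenCV background subtraction. Not the paper's actual method.
    import cv2

    cap = cv2.VideoCapture(0)  # assumed webcam device index
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=300, varThreshold=32, detectShadows=False)  # assumed parameters

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask: pixels that differ from the learned background model.
        mask = subtractor.apply(frame)

        # Clean the mask so the user's silhouette forms a mostly solid region.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

        # Build an RGBA texture: the mask becomes the alpha channel, so an avatar
        # billboard in the virtual environment would show only the segmented user.
        rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
        rgba[:, :, 3] = mask

        cv2.imshow("video avatar texture (preview)", rgba)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

In practice the resulting RGBA frames would be streamed to the rendering engine each frame and mapped onto a user-facing quad; the streaming and rendering integration is outside the scope of this sketch.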