Authors:
Sebastian P. Kleinschmidt and Bernardo Wagner
Affiliation:
Leibniz Universität Hannover, Germany
Keyword(s):
Virtual Environments, Augmented Reality, Sensor Fusion, GPU-Acceleration.
Related Ontology Subjects/Areas/Topics:
Human-Robots Interfaces; Informatics in Control, Automation and Robotics; Robotics and Automation; Telerobotics and Teleoperation; Virtual Environment, Virtual and Augmented Reality
Abstract:
In this paper, a new virtual reality (VR) control concept for operating robots in search and rescue (SAR) scenarios is introduced. The presented approach intuitively provides different sensor signals, such as RGB, thermal, and active infrared images, by projecting them onto 3D structures generated by a Time-of-Flight (ToF)-based depth camera. The multichannel 3D data are displayed on an Oculus Rift head-mounted display, which additionally provides head-tracking information. Using 3D structures can improve the perception of scale and depth by providing stereoscopic images, which cannot be generated from stand-alone 2D images. Besides the described operating concept, the main contributions of this paper are the introduction of a hybrid calibration pattern for multi-sensor calibration and a high-performance 2D-to-3D mapping procedure. To ensure low latencies, all steps of the algorithm are performed in parallel on a graphics processing unit (GPU), which reduces the traditional processing time on a central processing unit (CPU) by 80.03%. Furthermore, the different input images are merged according to their importance for the operator to create a multi-sensor point cloud.
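
The paper's exact mapping procedure is not reproduced here; the following CUDA sketch only illustrates the general idea of a per-pixel 2D-to-3D mapping under standard pinhole assumptions: each ToF depth pixel is back-projected to a 3D point, transformed into a 2D sensor's frame using calibrated extrinsics (R, t), and the projected pixel is sampled as that point's channel value. All identifiers (mapDepthToColor, Intrinsics, etc.) are illustrative, not from the paper.

    // Hypothetical per-pixel 2D-to-3D mapping kernel (pinhole model assumed).
    #include <cuda_runtime.h>

    struct Intrinsics { float fx, fy, cx, cy; };

    __global__ void mapDepthToColor(const float* depth,      // ToF depth image [m]
                                    const uchar3* color,     // 2D sensor image (RGB/thermal)
                                    float3* points,          // output: 3D points
                                    uchar3* pointColors,     // output: per-point channel value
                                    Intrinsics tof, Intrinsics cam,
                                    const float* R, const float* t, // row-major 3x3, 3x1 extrinsics
                                    int w, int h, int cw, int ch)
    {
        int u = blockIdx.x * blockDim.x + threadIdx.x;
        int v = blockIdx.y * blockDim.y + threadIdx.y;
        if (u >= w || v >= h) return;

        int idx = v * w + u;
        float z = depth[idx];
        if (z <= 0.f) { points[idx] = make_float3(0.f, 0.f, 0.f); return; }

        // Back-project the depth pixel into the ToF camera frame.
        float3 p = make_float3((u - tof.cx) * z / tof.fx,
                               (v - tof.cy) * z / tof.fy,
                               z);

        // Transform into the 2D sensor's frame and project onto its image plane.
        float3 q = make_float3(R[0]*p.x + R[1]*p.y + R[2]*p.z + t[0],
                               R[3]*p.x + R[4]*p.y + R[5]*p.z + t[1],
                               R[6]*p.x + R[7]*p.y + R[8]*p.z + t[2]);
        int cu = (int)(cam.fx * q.x / q.z + cam.cx);
        int cv = (int)(cam.fy * q.y / q.z + cam.cy);

        points[idx] = p;
        if (q.z > 0.f && cu >= 0 && cu < cw && cv >= 0 && cv < ch)
            pointColors[idx] = color[cv * cw + cu];   // sample the 2D sensor
        else
            pointColors[idx] = make_uchar3(0, 0, 0);  // no valid projection
    }

Because every pixel is independent, one GPU thread per pixel parallelizes the whole mapping, which is consistent with the latency reduction the abstract reports.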
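Merging the input images "according to their importance for the operator" could, in the simplest case, be a per-point weighted blend of channels. The kernel below is a minimal sketch of such a blend with a single operator-chosen weight; it is an assumption for illustration, not the paper's actual weighting scheme.

    // Hypothetical per-point fusion kernel: blends RGB and thermal samples
    // into one value using an operator-chosen importance weight wThermal.
    __global__ void fuseChannels(const uchar3* rgb, const uchar3* thermal,
                                 uchar3* fused, float wThermal, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float w = fminf(fmaxf(wThermal, 0.f), 1.f);   // clamp weight to [0, 1]
        fused[i] = make_uchar3(
            (unsigned char)((1.f - w) * rgb[i].x + w * thermal[i].x),
            (unsigned char)((1.f - w) * rgb[i].y + w * thermal[i].y),
            (unsigned char)((1.f - w) * rgb[i].z + w * thermal[i].z));
    }

Raising wThermal toward 1 would emphasize the thermal channel in the fused point cloud, e.g. to highlight heat sources in a SAR scene.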