Figure 19: Heart rate graph, in beats per minute, measured with the same sensor before the experiment.
4 CONCLUSIONS AND FUTURE WORK
This paper described how a Mixed Reality HRI can be used in a real hazardous scenario where a redundant manipulator, teleoperated by an operator, performs an inspection task. It presented the robotic system's architecture and the functionalities that were achieved. The standard 2D GUI and the 3D Mixed Reality GUI were compared in an experiment that measured the operator's workload and biometric parameters. The main conclusion from this experiment was that teleoperation with the 3D MR HRI mitigates cognitive fatigue and stress by improving the operator's understanding of the robot's pose and the surrounding scene.
To allow Cartesian control of this robot, the pose control problem of the redundant manipulator will be studied and different solutions (Jacobian-based, heuristic) will be tested.
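As an illustration of the Jacobian-based family of solutions mentioned above, the following is a minimal sketch of one damped least-squares (Levenberg-Marquardt) step, a standard way to map a Cartesian pose error to joint velocities for a redundant arm; the function name and damping value are illustrative choices, not taken from the paper.

```python
import numpy as np

def dls_ik_step(jacobian, pose_error, damping=0.05):
    """One damped least-squares inverse-kinematics step.

    Maps a Cartesian pose error to joint velocities for a redundant
    manipulator (more joints than task dimensions). The damping term
    keeps the solution bounded near kinematic singularities, where a
    plain pseudoinverse would produce very large joint velocities.
    """
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(pose_error, dtype=float)
    # dq = J^T (J J^T + lambda^2 I)^-1 e  (damped pseudoinverse)
    JJt = J @ J.T
    dq = J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), e)
    return dq
```

With small damping the resulting joint step closely reproduces the commanded Cartesian error; raising the damping trades tracking accuracy for robustness near singularities.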
To increase the trust in and robustness of the simulated environment, real-time point cloud and environment reconstruction is an important evolution of the Mixed Reality control. Future work will be dedicated to integrating these functionalities.
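A common first step when streaming sensor point clouds into a reconstructed scene is voxel downsampling, which bounds the map size as clouds accumulate. The sketch below is an assumed, minimal numpy-only version (the paper does not specify the reconstruction pipeline); the voxel size is an arbitrary placeholder.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Collapse an (N, 3) point cloud onto a voxel grid, keeping one
    centroid per occupied voxel. Bounds memory when accumulating
    streamed clouds into a single reconstructed environment."""
    pts = np.asarray(points, dtype=float)
    # Integer voxel index for each point.
    keys = np.floor(pts / voxel_size).astype(np.int64)
    # Map each point to its voxel's index in the unique-voxel list.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n = int(inverse.max()) + 1
    # Average all points falling into the same voxel.
    sums = np.zeros((n, 3))
    np.add.at(sums, inverse, pts)
    counts = np.bincount(inverse, minlength=n).reshape(-1, 1)
    return sums / counts
```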
As the 3D environment is rendered on a 2D screen, the user's immersion and experience improve with larger screens. Greater scene immersion may be achieved with a VR headset and/or different input devices, such as gesture control or VR controllers. However, these require more preparation and setup from the operator in a real intervention, where access time to the area may be limited. Future work will also integrate these devices and evaluate their effectiveness in common teleoperation tasks at CERN.
Further study and tests will be carried out to compare the readings from GSR measurement, heart rate, and camera recordings in order to select the factors that clearly affect the operator's heart rate and GSR. If the effect is immediate, these values could be shown to the operator in the GUI so that they become aware of their body's reaction to the task being performed.
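One simple way such biometric feedback could surface in the GUI is a deviation check against the operator's pre-experiment baseline, as sketched below. The function name and thresholds are hypothetical placeholders for illustration, not validated values from the paper.

```python
def stress_flag(heart_rate, gsr, hr_baseline, gsr_baseline,
                hr_margin=10.0, gsr_margin=0.2):
    """Return True when heart rate (bpm) or GSR deviates notably from
    the operator's pre-task baseline, so the GUI can raise an alert.
    Margins are illustrative, not clinically validated thresholds."""
    hr_elevated = heart_rate > hr_baseline + hr_margin       # e.g. +10 bpm
    gsr_elevated = gsr > gsr_baseline * (1.0 + gsr_margin)   # e.g. +20 %
    return hr_elevated or gsr_elevated
```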