et al., 2013), or by looking at physiological measures, such as the variation in heart rate, when performing the interaction tasks in virtual or augmented scenarios (Chessa et al., 2016). However, while questionnaires and physiological measures yield highly subjective feedback, the quantitative analyses adopted so far focus on a very limited part of the interaction, leading to an analysis that only partially evaluates the quality of the interaction itself.
To overcome these limits, in this paper we propose to analyse how actions are performed in simple augmented reality scenarios, adopting motion qualities to investigate the overall level of comfort during the interaction, based on indicators of the naturalness of the movements. More specifically, we address the problem by considering motion features able to capture, on the one hand, the geometrical properties of the movements (i.e., how the action evolves over time from the spatial point of view) and, on the other, the dynamics of the motion, in terms of hand velocity. We draw inspiration from well-known regularities of biological motion, which are the outcome of the fact that, as human beings, our movements are constrained by our physical nature. Different, yet related, theories have been formulated; see, for instance, the minimum-jerk and isochrony principles (Viviani and Flash, 1995). Among them, we specifically refer to the Two-Thirds Power Law, a well-known invariant of upper-limb human movements (Viviani and Stucchi, 1992), which provides a mathematical model of the mutual influence between the shape and the kinematics of human movements (Greene, 1972).
More specifically, the law describes a power-law relation between the velocity and the curvature of a movement trajectory; in the case of human motion (end-point movements in particular), it has been shown that the exponent is very close to the reference value of 1/3. In our work, we adopt the empirical formulation of the law proposed in (Noceti et al., 2015), which has been successfully applied to recognise human motion from visual data.
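For reference, the law is commonly written as a power law linking the tangential velocity of the end-point to the local curvature of its trajectory. A standard formulation from the literature (in our notation, not a verbatim transcription of the one in (Noceti et al., 2015)) is

v(t) = K \kappa(t)^{-\beta}, \qquad \beta \approx 1/3,

where v(t) is the tangential velocity, \kappa(t) the local curvature, and K a velocity gain factor; equivalently, the angular velocity follows a(t) = K \kappa(t)^{2/3}, which gives the law its name. Taking logarithms yields \log v(t) = \log K - \beta \log \kappa(t), so that \beta can be estimated by linear regression in log-log space.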
Here, we use the obtained statistics to compare the quality of interaction in augmented reality scenarios with that in a comparable real-world scenario. We consider interaction sessions in such scenarios in terms of repetitions of reaching actions towards a common reference target, presented following different visualization strategies.
More specifically, we compare a classical 2D and a stereoscopic visualization; as for the interaction, the use of a virtual avatar of the hand is compared with the more natural use of the real hand. It is worth noting that we adopt a very simple augmented reality scenario and interaction task, in order to focus on the way the action is performed, limiting the degrees of freedom and the perceptual cues that, of course, influence the movements.
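To make the kind of statistic we rely on concrete, the following is a minimal sketch (in Python; the function name, the uniform-sampling assumption and the finite-difference scheme are illustrative choices of ours, not the actual pipeline of (Noceti et al., 2015)) of how the power-law exponent can be estimated from a tracked 3D hand trajectory:

import numpy as np

def estimate_power_law_exponent(traj, dt):
    # traj: (N, 3) array of 3D hand positions sampled at a fixed rate;
    # dt: sampling interval in seconds.
    # Returns (beta, log_K) for the fit log v = log_K - beta * log kappa.
    vel = np.gradient(traj, dt, axis=0)    # first derivative: velocity
    acc = np.gradient(vel, dt, axis=0)     # second derivative: acceleration
    speed = np.linalg.norm(vel, axis=1)    # tangential velocity v(t)
    # Curvature of a 3D curve: kappa = |v x a| / |v|^3.
    kappa = (np.linalg.norm(np.cross(vel, acc), axis=1)
             / np.maximum(speed, 1e-9) ** 3)
    # Discard samples where velocity or curvature is numerically degenerate.
    ok = (speed > 1e-6) & (kappa > 1e-6)
    neg_beta, log_K = np.polyfit(np.log(kappa[ok]), np.log(speed[ok]), 1)
    return -neg_beta, log_K

For movements that comply with the Two-Thirds Power Law, the estimated exponent should lie close to the reference value of 1/3, and its deviation from this value can serve as an indicator of less natural motion.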
The remainder of the paper is organized as follows. In Sec. 2 we present the augmented reality setup and provide details of the different visualization modalities, which are thoroughly compared in Sec. 3, where we describe the data collection and the experimental analysis. Sec. 4 is left to conclusions and future lines of research.
2 MATERIAL AND METHODS
2.1 The Augmented Reality Setup
The experimental evaluation described in this paper was performed using a setup composed of the following modules:
- Visualization of the virtual scene on a large field-of-view display, which can be used in both stereoscopic and non-stereoscopic mode. In this configuration, virtual objects appear overlaid onto the real scene (e.g. the desktop, the surroundings of the room). Thus, this setup does not represent an implementation of immersive virtual reality, but an augmented or mixed reality setup, since virtual and real stimuli coexist at the same time. The virtual scene is designed and rendered using the Unity3D engine.
- Acquisition of the position of the user's hands, using a Leap Motion controller, a small USB peripheral device designed to be placed near a physical desktop. The device is able to track the fine movements of the hands and the fingers in a roughly hemispherical volume of about 1 meter above the sensor, at an acquisition frequency of about 120 Hz. The accuracy claimed for the detection of the fingertips is approximately 0.01 mm; however, in (Weichert et al., 2013) the average accuracy of the controller was shown to be about 0.7 mm, which is still sufficient for us to effectively use the Leap Motion in our setup, for the purposes of the experiments. The 3D positions of the 5 fingers and of the palm center of each hand are available. Such information is both used inside the augmented reality environments to render the avatars of the hands, in the modalities in which they are required, and saved to files for the quantitative evaluation explained in this paper (a minimal sketch of this logging step is given after this list).
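As an illustration of this acquisition step, the sketch below polls the controller and logs palm and fingertip positions to a CSV file. It assumes the Python bindings of the classic Leap Motion desktop SDK; the file layout, the polling loop and the function name are our own illustrative choices, not the actual acquisition code used in the experiments.

import csv
import time

import Leap  # Python bindings of the classic Leap Motion desktop SDK

def log_hand_positions(path, duration_s=10.0):
    # Poll the controller and save palm and fingertip positions
    # (millimetres, device coordinates) to a CSV file.
    controller = Leap.Controller()
    t0 = time.time()
    with open(path, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_us", "joint", "x", "y", "z"])
        while time.time() - t0 < duration_s:
            frame = controller.frame()  # most recent tracking frame
            for hand in frame.hands:
                p = hand.palm_position
                writer.writerow([frame.timestamp, "palm", p.x, p.y, p.z])
                for i, finger in enumerate(hand.fingers):
                    q = finger.tip_position
                    writer.writerow([frame.timestamp, "finger%d" % i,
                                     q.x, q.y, q.z])

In a real acquisition one would rather register a Leap.Listener callback and deduplicate frames via frame.id, since naive polling can record the same ~120 Hz frame more than once; the sketch only illustrates the data that the setup records.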