place and ball-throwing data into movement building blocks with a bell-shaped velocity profile using a probabilistic algorithm previously presented in (Senger et al., 2014). Furthermore, we showed that the obtained segments can be reliably assigned to predefined categories using simple 1NN classification. Notably, this works with a small set of training data: compared to HMM-based movement classification, a considerably higher accuracy is achieved on small training sets.
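To make the classification step concrete, the following minimal sketch shows 1NN classification of velocity-profile segments. It assumes that each segment is represented by its velocity profile resampled to a fixed length and compared with a Euclidean metric; the function names and the toy bell-shaped profiles are illustrative choices, not taken from our implementation.

```python
import numpy as np

def resample_profile(velocity, n_samples=50):
    """Linearly resample a 1-D velocity profile to a fixed length."""
    t_old = np.linspace(0.0, 1.0, len(velocity))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.interp(t_new, t_old, velocity)

def classify_1nn(query, train_profiles, train_labels):
    """Return the label of the closest training profile (Euclidean distance)."""
    dists = [np.linalg.norm(query - p) for p in train_profiles]
    return train_labels[int(np.argmin(dists))]

# Toy usage: two movement classes with bell-shaped velocity profiles
# that differ in peak velocity.
t = np.linspace(0.0, 1.0, 80)
bell = lambda peak: peak * np.sin(np.pi * t) ** 2
train_profiles = [resample_profile(bell(0.5)), resample_profile(bell(1.5))]
train_labels = ["place", "throw"]
print(classify_1nn(resample_profile(bell(1.4)), train_profiles, train_labels))
```

Because 1NN needs no model fitting, adding a new training demonstration amounts to appending one profile and label, which is part of why the approach remains effective with little training data.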
For future work, an integrated algorithm for segmentation and classification should be developed in which both parts of the motion analysis influence each other. Such an approach becomes relevant, for example, when extra segments are generated. Extra segments can be caused by movements that the demonstrator did not execute fluently, e.g., because he slowed down to consider the exact position at which to place an object. Such extra segments could be merged by identifying that only their concatenation belongs to one of the known movement classes, as sketched below.
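A possible realization of this merging step, under the assumption that segments are 1-D velocity profiles and that the 1NN distance to the training set serves as a goodness-of-fit measure, could look as follows; the greedy decision rule is our own illustration, not an evaluated algorithm.

```python
import numpy as np

def resample(segment, n_samples=50):
    """Resample a 1-D velocity segment to a common length for comparison."""
    return np.interp(np.linspace(0, 1, n_samples),
                     np.linspace(0, 1, len(segment)), segment)

def nn_distance(profile, train_profiles):
    """Distance to the closest known training profile."""
    return min(np.linalg.norm(profile - p) for p in train_profiles)

def merge_extra_segments(segments, train_profiles):
    """Greedily fuse adjacent segments if only their concatenation
    fits one of the known movement classes well."""
    segments = list(segments)
    i = 0
    while i < len(segments) - 1:
        a, b = segments[i], segments[i + 1]
        fused = np.concatenate([a, b])
        d_fused = nn_distance(resample(fused), train_profiles)
        d_parts = min(nn_distance(resample(a), train_profiles),
                      nn_distance(resample(b), train_profiles))
        if d_fused < d_parts:
            segments[i:i + 2] = [fused]  # merge and re-check at same index
        else:
            i += 1
    return segments
```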
To achieve a higher classification accuracy, more sophisticated feature extraction techniques are of high interest. Particularly for the analysis of manipulation movements, features based on the joint angles should be evaluated; a simple example is sketched below.
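As a starting point, joint-angle features could be as simple as per-joint statistics over a segment; the concrete feature set below (per-joint range and mean angular speed) is only an assumption for illustration.

```python
import numpy as np

def joint_angle_features(angles, dt):
    """Compute simple per-joint features for one movement segment.

    angles: array of shape (T, J), joint angles over T time steps
    dt:     sampling interval in seconds
    """
    ang_range = angles.max(axis=0) - angles.min(axis=0)  # per-joint range
    ang_speed = np.abs(np.diff(angles, axis=0)) / dt     # angular speed
    return np.concatenate([ang_range, ang_speed.mean(axis=0)])
```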
Furthermore, it is desirable to further reduce the manual effort needed for classification by grouping the movement segments with an unsupervised approach. Nonetheless, annotations such as move object are needed in many applications, e.g., to select segments that should be imitated by a robot. Ideally, this annotation is done without manual intervention, e.g., by analyzing movement features arising from different modalities. Besides motion data, physiological data such as eye tracking or electroencephalographic (EEG) recordings could be used for this annotation.
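One way such an unsupervised step could be organized is to cluster segment features and request a single annotation per cluster, which the automated analysis of other modalities might eventually replace; plain k-means and the representative selection below are placeholder choices to illustrate the idea.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means on segment feature vectors X of shape (N, D)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each segment to the nearest cluster centre.
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):          # keep old centre if cluster empty
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

def annotate_clusters(X, k, label_fn):
    """Obtain one label per cluster (label_fn) and propagate it to members."""
    assign, centers = kmeans(X, k)
    labels = {}
    for j in range(k):
        members = np.where(assign == j)[0]
        # Representative: the segment closest to the cluster centre.
        rep = members[np.argmin(((X[members] - centers[j]) ** 2).sum(-1))]
        labels[j] = label_fn(rep)
    return [labels[j] for j in assign]
```

Here label_fn stands for whatever annotation source is available, be it a human labelling one representative segment or a classifier operating on another modality.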
Simple approaches such as the one presented here are highly relevant for the development of embedded multimodal interfaces, since they can run on miniaturized processing units with relatively low processing power and energy consumption. This matters because in many robotic applications the extra resources available for interfacing are limited, which restricts the integration of interfaces into a robotic system. Wearable assistive devices are likewise limited in size, energy, and computing power. Hence, future approaches must focus not only on accuracy but also on simplicity. Our results show that both accuracy and simplicity can be achieved.
REFERENCES
Aarno, D. and Kragic, D. (2008). Motion intention recog-
nition in robot assisted applications. Robotics and Au-
tonomous Systems, 56:692–705.
Adi-Japha, E., Karni, A., Parnes, A., Loewenschuss, I., and Vakil, E. (2008). A shift in task routines during the learning of a motor skill: Group-averaged data may mask critical phases in the individuals' acquisition of skilled performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34:1544–1551.
Argall, B. D., Chernova, S., Veloso, M., and Browning, B.
(2009). A survey of robot learning from demonstra-
tion. Robotics and Autonomous Systems, 57(5):469–
483.
Fearnhead, P. and Liu, Z. (2007). On-line inference for mul-
tiple change point models. Journal of the Royal Sta-
tistical Society: Series B (Statistical Methodology),
69:589–605.
Fod, A., Matarić, M., and Jenkins, O. (2002). Automated derivation of primitives for movement classification. Autonomous Robots, 12:39–54.
Gong, D., Medioni, G., and Zhao, X. (2013). Structured
time series analysis for human action segmentation
and recognition. IEEE Transactions on Pattern Anal-
ysis and Machine Intelligence, 36(7):1414–1427.
Gräve, K. and Behnke, S. (2012). Incremental action recognition and generalizing motion generation based on goal-directed features. In International Conference on Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ, pages 751–757.
Graybiel, A. (1998). The basal ganglia and chunking of ac-
tion repertoires. Neurobiology of Learning and Mem-
ory, 70:119–136.
Kirchner, E. A., de Gea Fernández, J., Kampmann, P., Schröer, M., Metzen, J. H., and Kirchner, F. (2015). Intuitive Interaction with Robots - Technical Approaches and Challenges, pages 224–248. Springer Verlag GmbH Heidelberg.
Kulić, D., Ott, C., Lee, D., Ishikawa, J., and Nakamura, Y. (2012). Incremental learning of full body motion primitives and their sequencing through human motion observation. The International Journal of Robotics Research, 31(3):330–345.
Metzen, J. H., Fabisch, A., Senger, L., Gea Fernández, J., and Kirchner, E. A. (2013). Towards learning of generic skills for robotic manipulation. KI - Künstliche Intelligenz, 28(1):15–20.
Morasso, P. (1981). Spatial control of arm movements. Ex-
perimental Brain Research, 42:223–227.
Mülling, K., Kober, J., Kroemer, O., and Peters, J. (2013). Learning to select and generalize striking movements in robot table tennis. The International Journal of Robotics Research, 32:263–279.
Pastor, P., Hoffmann, H., Asfour, T., and Schaal, S. (2009). Learning and generalization of motor skills by learning from demonstration. In 2009 IEEE International Conference on Robotics and Automation, pages 763–768. IEEE.