Figure 8: Scenario b) Proximity Call – Stop – Going (panels (i)–(iii)).
4 CONCLUSION
We presented a human-robot interaction system for
a service robot that recognizes the activity being
performed and reacts appropriately to each situation.
In the pre-processing step, we proposed a
view-invariant transformation of the skeleton data
captured by a Microsoft Kinect camera, so that the
extracted features do not depend on the camera
viewpoint. To reduce the dimensionality of these
features, we analysed which joints are the most
informative, concentrating on those that contribute
most to the activity being performed. Five significant
joints of the human skeleton were thus selected and
used as the input layer of a deep CNN; a minimal
sketch of this pre-processing is given below.
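The following Python sketch illustrates one plausible form of this pipeline: a body-centred canonical frame for view invariance, followed by selection of a small joint subset as CNN input. The torso/hip joints used to define the frame, the joint indices, and the five selected joints are all assumptions for illustration; the paper's exact transform and joint set may differ.

```python
import numpy as np

# Hypothetical Kinect (20-joint) indices; the five "significant" joints
# chosen in the paper are not restated here, so these are illustrative.
TORSO, L_HIP, R_HIP = 2, 12, 16
SELECTED_JOINTS = [3, 7, 11, 15, 19]   # e.g. head, hands, feet (assumed)

def view_invariant(skeleton):
    """Map one frame of 3-D joints (J x 3) into a body-centred frame.

    Translation: the torso joint becomes the origin.
    Rotation: the hip-to-hip axis is aligned with the x-axis, one
    common way to remove the camera viewpoint (the paper's exact
    transform may differ from this sketch).
    """
    centred = skeleton - skeleton[TORSO]        # remove translation
    hip_axis = centred[R_HIP] - centred[L_HIP]
    hip_axis /= np.linalg.norm(hip_axis)
    up = np.array([0.0, 1.0, 0.0])              # assumed vertical axis
    fwd = np.cross(hip_axis, up)
    fwd /= np.linalg.norm(fwd)
    up_ortho = np.cross(fwd, hip_axis)          # re-orthogonalise the basis
    R = np.stack([hip_axis, up_ortho, fwd])     # rows = new basis vectors
    return centred @ R.T                        # rotate every joint

def make_cnn_input(sequence):
    """Turn a (T x J x 3) skeleton sequence into a (T x 5 x 3) tensor
    holding only the selected joints, ready to feed the CNN."""
    frames = np.stack([view_invariant(f) for f in sequence])
    return frames[:, SELECTED_JOINTS, :]
```

Fixing a body-centred coordinate frame in this way makes the features independent of where the camera stands, which is what allows the recognizer to keep working as the robot moves around the user.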
The model was tested successfully in real time, so it
represents a promising approach for the field of social
robotics, where natural and intuitive human-robot
interaction is needed. However, further development
is required before the system can be deployed in
real-life settings. In future work, we therefore want to
consider additional sensor modalities, such as depth
maps and RGB sequences, to provide extra contextual
information, and to investigate the best architecture
for fusing these modalities. This should improve the
activity recognition accuracy and, consequently, the
interactivity of our service robot.