6 Conclusions and Future Work
The described framework aims to build a vision system focused on extracting information useful for understanding human intentions. We have described a possible composition of several standard artificial vision algorithms for implementing an intentional vision system to be embedded in a cognitive architecture. More extensive experimentation is in progress to improve the structure of the collection of habits, in order to gain efficiency and precision. Different application scenarios will be considered to allow an exhaustive testing phase of the proposed architecture. Our intent is to include more sophisticated reasoning and planning modules; for example, it would be particularly interesting if the system could recognize when the human finds it difficult to accomplish a task. Moreover, we are investigating suitable qualitative metrics to evaluate the effectiveness of the robot behavior during collaboration phases.
Acknowledgements
We thank Giuseppe Arduino and Dario Zangara for contributing to the hardware setup and the implementation of the software; we are also grateful to Professor Antonio Chella and Haris Dindo for fruitful discussions about the topics of this paper.