Table 2: Evaluation Results. The motions were generated using the standard Viterbi algorithm.

Motion         Comments
Shaking hands  5/10 - generated motion is floaty
Pushing        3/10 - original motion is wobbly
               1/10 - generated person did not touch the other person
Pulling        no comments
5 CONCLUSIONS AND FUTURE WORK
In this paper, we presented a new approach for generating interactive behaviours for virtual characters using the windowed Viterbi algorithm, and compared the performance of the standard and windowed Viterbi algorithms.
To this end, we trained a dual HMM representing the interactive behaviours of two people. We then used a sequence of 3D poses of one person, in conjunction with the dual HMM, to generate the responsive behaviour of a virtual character using the windowed Viterbi algorithm. The analysis of the results and the evaluation experiments shows that the windowed Viterbi algorithm generates behaviours very similar to the real ones. Moreover, the windowed Viterbi method does not require the full observation sequence before processing starts, so it can be used in a real-time system.
In our ongoing work, we take advantage of the windowed Viterbi algorithm in a new approach to modelling motion, with the goal of obtaining better tracking and generation results.