6.2 Limitations
Despite the success, our work has a few limitations:
Our algorithm performs poorly at extreme data points (large slowdowns, large exaggerations) and can produce very unappealing results when the coefficients are too large (>300% change). In terms of the animation time-shift, our method only supports systematically slowing down the randomly selected clips; it cannot speed them up. Adding the ability to speed up the animation would broaden the system's functionality and create a better metaphor between our user interface (the curve) and the results.
For various reasons discussed in Section 5, our edited animations were not always preferred over the original animations. These results can be explained by a combination of extreme coefficients applied by the algorithm, the participants' inexperience with animation, and a lack of context (a human skeleton instead of a cartoon character, and a blank setting instead of a cartoon environment). Furthermore, our user studies, particularly Part A, pose a few issues.
The question “Which animation do you prefer?” is vague and subjective, and thus produced scattered results. We suggest changing it to “Which animation is more suitable for cartoon movies?” or a similar question that connects more directly to our study goals. As for the participant pool, a better gender balance (at minimum a 40%/60% split) would reduce bias, particularly regarding the visual appeal of the exaggerated feminine walk. Finally, to better align with our system goals, the participants should have been 3D animators, or at least have had some experience with current cartoon animation methods.
6.3 Future Work
As future work, we suggest the following research directions:
Real-time editing: To streamline the editing process, we suggest implementing the curve editing system in real time. This would allow users to make more “on the fly” edits and fine-tune the results.
Less restrictive foot constraints: Our foot constraints reduced the amount of exaggeration applied to joints below the hip. While this was necessary to maintain an appropriate level of animation quality, we suggest exploring ways to couple the foot-planting process with the exaggeration algorithm to allow more interesting modifications of the lower-body animation.
Use of cartoon-like character models and settings when editing motion: Using humanoid skeletons with realistic proportions poses a few cognitive issues, as certain cartoon motions can look awkward when applied to a realistic human skeleton. The animation itself may not be inherently bad; it may simply look out of place. We suggest skinning a cartoon-like character onto the skeleton to further explore this issue.
ACKNOWLEDGEMENTS
We would like to thank the School of Information Technology at Carleton University for providing access to their motion capture studio, as well as all the participants who took the time to help us in our data-collection phases and user studies.
This project was funded by The Interactive and
Multi-Modal Experience Research Syndicate
(IMMERSe).