for the real-time simulation of deformable objects like
cloth and hair. Although much work has already been done
on realistic rendering and simulation, research is
usually conducted in standalone applications without
embedding the algorithms into a wider field of
applications, as is needed for, e.g., X3D, which is not
only an open standard for interactive 3D graphics but
is also easy to learn for non-programmers. We therefore
proposed extensions of the standard, which were
evaluated in two different scenarios.
One major goal was visual realism, where we face
the same problem as research on humanoid robots:
the "uncanny valley", a hypothesis introduced by M.
Mori in 1970. He states that as a robot is made more
human-like, the emotional response of a human being
to the robot becomes increasingly positive and
empathic, until a point is reached beyond which it
becomes strongly repulsive. But as appearance and
motion continue to become less distinguishable from
a human being's, the emotional response becomes
positive again and approaches human-human empathy
levels. This also holds for virtual characters, and
for convincing results we have to come very close to
human-like appearance and behavior. We have developed
powerful algorithms to improve rendering and have even
taken care of dynamics. But simply applying them
to a character model leaves us deeply stuck inside the
uncanny valley without attaining convincing results,
because parameter optimization is still tedious and
has to be done by experts. An example is the
forthcoming game from Crytek, which was postponed
several times even though many people are working on it.
Hence, automatic generation of realistic virtual humans
is not yet possible without human intervention. One
solution could be to set up libraries a user can choose
from and to provide authoring tools that guide the user
through the creation process, starting at a very coarse
level and refining the choices step by step.
Concerning recorded motion-capture data, the
biggest problem was data quality. Without heavy
manual work one will face "floating" characters or
strange artifacts when blending between two very
different poses. Since any kind of blending performs
some form of interpolation, there will always be cases
where blending fails and delivers unsatisfactory
results. Without model knowledge or very accurate
animation data we will not be able to blend animations
convincingly. Prerecorded animation must therefore be
planned accurately: the starting and ending poses
should be defined, as well as which joints are
involved, and blending between very different poses
should be avoided. To increase flexibility, research
should focus on automatic, real-time-capable methods
for the creation of animation data.
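To make concrete why any blending amounts to interpolation, per-joint pose blending can be sketched as spherical linear interpolation (slerp) of joint rotations. This is a minimal illustrative sketch, not the system described in the paper; the function names, the (w, x, y, z) quaternion convention, and the per-joint dictionary layout are assumptions made for the example.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    theta = math.acos(min(dot, 1.0))   # angle between the two rotations
    if theta < 1e-6:                   # nearly identical rotations: no blend needed
        return q0
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_poses(pose_a, pose_b, t):
    """Blend two skeleton poses given as {joint_name: quaternion} dicts."""
    return {joint: slerp(q, pose_b[joint], t) for joint, q in pose_a.items()}
```

For joints whose rotations differ only slightly, the interpolated pose is plausible; between two very different poses, however, the interpolation passes through configurations that were never observed, which is exactly where the "floating" artifacts described above arise.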
ACKNOWLEDGEMENTS
This work was part of the project Virtual Human,
funded by the German Federal Ministry of Education and Research.
GRAPP 2008 - International Conference on Computer Graphics Theory and Applications