many methods have achieved fluid motions, like walking and running (Bergamin et al., 2019; Harvey and Pal, 2018), they mostly respond only to the terrain the character is standing on. On the other hand, some deep learning models are able to recognize environment objects and interact with them (Starke et al., 2019; Holden et al., 2020), but the actions are baked into the model, so it must be retrained every time new actions are needed. Thus, we propose an approach that mixes deep learning with traditional methods to generate character animations and interactions.
To overcome this issue and make the process more accessible, we use a Phase-Functioned Neural Network (PFNN) (Holden et al., 2017) for basic locomotion states, since it gives the user more freedom, and an animation blending process on top of the network to transition to different custom actions. Our work is limited to adding animations for the character to perform, depending on the object it interacts with, by switching or mixing the PFNN output with a pre-recorded animation. We use Unity3D built-in components for physics, collisions, and animation control, as well as an animation rigging library, for ease of use.
Our main contributions are as follows:
• We develop an implementation of the PFNN with an interaction system on top, to which animations can be assigned as needed.
• We propose the use of animation blend transitions
with inverse kinematics for pose correction.
• We present an analysis of our method and a com-
parison with state-of-the-art approaches.
This paper is organized as follows. Section 2 discusses related works. Section 3 introduces relevant concepts, defines the problem, and presents our approach. Section 4 presents an experimental study showing the feasibility of our approach; finally, we conclude the paper.
2 RELATED WORKS
In (Holden et al., 2017), the authors propose a novel framework for motion synthesis called the Phase-Functioned Neural Network. In contrast to other motion synthesis networks, it uses a particular time variable called the phase, represented by a phase function as seen in Equation 1. In that article, the authors use the Catmull-Rom cubic spline as the phase function and change the values of the weights and biases of the network depending on the current phase.
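The following Python sketch illustrates the general idea of such a phase function (variable names are illustrative, not taken from the original implementation): the network weights are interpolated from a set of control weight tensors with a cyclic Catmull-Rom spline indexed by the phase p in [0, 2*pi):

```python
import numpy as np

def phase_function(p, control_weights):
    """control_weights: array of shape (K, ...) holding K weight snapshots."""
    K = len(control_weights)        # the PFNN paper uses K = 4 control points
    u = (p / (2.0 * np.pi)) * K     # continuous index along the phase cycle
    k = int(u) % K                  # spline segment index
    t = u - int(u)                  # local parameter in [0, 1)
    a0 = control_weights[(k - 1) % K]
    a1 = control_weights[k]
    a2 = control_weights[(k + 1) % K]
    a3 = control_weights[(k + 2) % K]
    # Cubic Catmull-Rom interpolation of the weight tensors.
    return (a1
            + t * (0.5 * (a2 - a0))
            + t**2 * (a0 - 2.5 * a1 + 2.0 * a2 - 0.5 * a3)
            + t**3 * (1.5 * (a1 - a2) + 0.5 * (a3 - a0)))
```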
On the other hand, our work utilizes the PFNN to generate basic motion in real time. Nevertheless, we add an easy-to-use interaction generation system based on Inverse Kinematics (IK) to extend the reach of the PFNN in a simple manner.
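As an illustration of the kind of IK computation involved, the sketch below solves a planar two-bone chain analytically (a simplified, hypothetical example; in practice we rely on Unity3D's animation rigging library rather than hand-written solvers):

```python
import numpy as np

def two_bone_ik(target, l1, l2):
    """Return (shoulder_angle, elbow_angle) reaching `target` from the origin
    with bone lengths l1 and l2, in the plane."""
    x, y = target
    # Clamp the target distance to the reachable range.
    d = np.clip(np.hypot(x, y), 1e-6, l1 + l2 - 1e-6)
    # Law of cosines for the elbow bend (0 means a straight arm).
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = np.pi - np.arccos(np.clip(cos_elbow, -1.0, 1.0))
    # Shoulder angle = direction to target minus the offset of the bent elbow.
    cos_off = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = np.arctan2(y, x) - np.arccos(np.clip(cos_off, -1.0, 1.0))
    return shoulder, elbow
```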
Motion Matching is a character animation technique that depends on a large amount of data and a set of algorithms that search for the animation best suited for the next frame. (Holden et al., 2020) propose a state-of-the-art mixed system built on the Motion Matching process, including the following algorithms:
1. Compressor: removes the need to store the rotations and translations of the character's joints by generating them using only the joint parameters of the character.
2. Stepper: generates a delta that aids the production of the next frame.
3. Projector: finds the most suitable next step for the animation using a K-nearest-neighbours search, as sketched below.
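The following Python sketch illustrates the projector idea in its simplest form (names are illustrative; the published system is considerably more elaborate): a nearest-neighbour lookup over a matrix of per-frame matching features built from the current pose and the desired trajectory:

```python
import numpy as np

def project(query, features):
    """query: (D,) feature vector; features: (N, D) per-frame feature matrix.
    Returns the index of the best matching frame (1-nearest neighbour)."""
    d = np.linalg.norm(features - query, axis=1)   # Euclidean distances
    return int(np.argmin(d))                       # jump playhead to this frame
```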
In the article, the authors describe the usage of four different neural networks to replace certain steps of these algorithms, resulting in a more efficient neural-network approach to Motion Matching. Our method is a mixed approach like Learned Motion Matching, with a clear difference: the integration in our method can be described as superficial, adding a layer of interactivity on top of the known PFNN. In contrast with Learned Motion Matching, our method needs neither retraining to add more interactions nor storing the animation database in the application to generate the animations.
In (Zhang et al., 2018), the authors propose Mode-Adaptive Neural Networks (MANN), which use the output of one network (named the Gating Network) as blending coefficients over expert weights to determine the dynamic weights of the Motion Prediction Network, in contrast to the PFNN, whose weights are calculated with a phase function. This gating network allows the character to switch or blend different locomotion phases according to user input and variations in the terrain the character is standing on. However, our method uses the PFNN, since it requires less training data and can be stored in little space. In addition, MANN needs the expert weights of each desired action so that they can be blended.
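A minimal sketch of this gating mechanism, under assumed shapes and names, could look as follows: the gating output forms a convex combination of K expert weight tensors, which is then used as the weights of the prediction network:

```python
import numpy as np

def blend_experts(gate_logits, expert_weights):
    """gate_logits: (K,) raw gating scores; expert_weights: (K, out, in)
    expert weight tensors. Returns one (out, in) blended weight matrix."""
    g = np.exp(gate_logits - gate_logits.max())
    g = g / g.sum()                        # softmax blend coefficients
    return np.tensordot(g, expert_weights, axes=1)
```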
In (Starke et al., 2019), the authors use a neural network to determine which action (state), or blend of actions, is needed in the next frame. The network senses the surroundings through many voxels arranged in a cylindrical area around the character, and represents interactions with objects as a voxelized projection of their shapes; these are added to the network inputs alongside the desired motion and character pose given by the user. Similar