Figure 6: Smoothness error distribution on the surface between frames. (a) Fashion2: raw sequence frames 292–299; (b) Fashion2: blended sequence frames 292–299.
6 CONCLUSIONS
The process presented provides a smooth motion path for concatenating human motion synthesis from 3D video sequences. In contrast to approaches that rely on depth cameras (Microsoft Kinect) or annotated markers (Flagg et al., 2009), we have shown that, in the absence of skeletal information, automatically detected surface correspondences from SIFT and MeshHOG allow an intermediate surface motion to be reconstructed, creating a seamless motion transfer between sequences. The process uses Laplacian mesh deformation and linear blending to preserve the non-rigid dynamics of the surface. Work is in progress to include additional coarse correspondences that fill in regions without features, giving greater flexibility in the re-use of motion sequences. Further emphasis is being placed on making surface feature matching temporally consistent, similar to the mesh-patch approach of Budd et al. (2013), to allow reliable estimation of a consistent structure.
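The linear blending step above can be sketched as a cross-fade of per-vertex positions over an overlap window, assuming the two mesh sequences have already been brought into vertex correspondence (as established here via SIFT/MeshHOG matching and Laplacian deformation). The function name, array layout, and ramp shape below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def blend_transition(seq_a, seq_b, overlap):
    """Linearly blend per-vertex positions over a transition window.

    seq_a, seq_b: arrays of shape (frames, vertices, 3) holding per-frame
    vertex positions of two mesh sequences in vertex correspondence
    (an assumption; correspondence itself is not computed here).
    overlap: number of frames over which to cross-fade.
    Returns the concatenated sequence with a blended transition.
    """
    # Blend weights ramp linearly from 0 (all seq_a) to 1 (all seq_b).
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]
    tail = seq_a[-overlap:]   # last frames of the outgoing sequence
    head = seq_b[:overlap]    # first frames of the incoming sequence
    blended = (1.0 - w) * tail + w * head
    return np.concatenate([seq_a[:-overlap], blended, seq_b[overlap:]], axis=0)
```

A linear ramp is the simplest choice; a smoother weight curve (e.g. a cosine ease) would reduce velocity discontinuities at the window boundaries at the cost of a slightly longer perceptible transition.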
REFERENCES
Arikan, O. and Forsyth, D. A. (2002). Interactive mo-
tion generation from examples. In ACM SIGGRAPH,
pages 483–490.
Baran, I., Vlasic, D., Grinspun, E., and Popović, J. (2009).
Semantic deformation transfer. In ACM SIGGRAPH,
pages 1–6.
Budd, C., Huang, P., Klaudiny, M., and Hilton, A. (2013).
Global non-rigid alignment of surface sequences. In-
ter. Journal of Computer Vision, 102(1-3):256–270.
Cignoni, P., Rocchini, C., and Scopigno, R. (1998). Metro:
measuring error on simplified surfaces. Computer
Graphics Forum, 17(2):167–174.
de Aguiar, E., Stoll, C., Theobalt, C., Ahmed, N., Seidel,
H.-P., and Thrun, S. (2008). Performance capture
from sparse multi-view video. In ACM SIGGRAPH.
Doshi, A., Starck, J., and Hilton, A. (2010). An empirical
study of non-rigid surface feature matching of human
from 3D video. Journal of Virtual Reality and Broad-
casting, 7(3).
Flagg, M., Nakazawa, A., Zhang, Q., Kang, S. B., Ryu,
Y. K., Essa, I., and Rehg, J. M. (2009). Human video
textures. In Symposium on Interactive 3D Graphics
and Games, pages 199–206.
Hsieh, M.-K., Chen, B.-Y., and Ouhyoung, M. (2005). Mo-
tion retargeting and transition in different articulated
figures. In 9th Inter. Conf. on Computer Aided Design
and Computer Graphics.
Huang, P., Hilton, A., and Starck, J. (2009). Human motion
synthesis from 3D video. In IEEE Conf. on Computer
Vision and Pattern Recognition.
Kircher, S. and Garland, M. (2008). Free-form motion pro-
cessing. ACM Transactions on Graphics, 27(2):1–13.
Kovar, L., Gleicher, M., and Pighin, F. (2002). Motion
graphs. In ACM SIGGRAPH.
Lowe, D. (2004). Distinctive image features from scale-
invariant keypoints. Inter. Journal of Computer Vision,
60(2):91–110.
Schödl, A., Szeliski, R., Salesin, D., and Essa, I. A. (2000).
Video textures. In ACM SIGGRAPH, pages 489–498.
Sorkine, O. (2006). Differential representations for mesh
processing. Computer Graphics Forum, 25(4):789–
807.
Starck, J. and Hilton, A. (2007). Surface capture for per-
formance based animation. Computer Graphics and
Applications, 27(3):21–31.
Starck, J., Miller, G., and Hilton, A. (2005). Video-based
character animation. In Symposium on Computer An-
imation.
SIGMAP 2014 - International Conference on Signal Processing and Multimedia Applications