and real-time blending of digital characters into real
world videos.
REFERENCES
Aberman, K., Shi, M., Liao, J., Lischinski, D., Chen, B., and Cohen-Or, D. (2018). Deep video-based performance cloning.
Buehler, C., Bosse, M., McMillan, L., Gortler, S., and Cohen, M. (2001). Unstructured lumigraph rendering. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’01, pages 425–432, New York, NY, USA. ACM.
Carranza, J., Theobalt, C., Magnor, M. A., and Seidel, H.-P. (2003). Free-viewpoint video of human actors. In ACM SIGGRAPH 2003 Papers, SIGGRAPH ’03, pages 569–577, New York, NY, USA. ACM.
Casas, D., Volino, M., Collomosse, J. P., and Hilton, A. (2014). 4D video textures for interactive character appearance. Comput. Graph. Forum, 33:371–380.
Catmull, E. E. (1974). A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD thesis. AAI7504786.
Chan, C., Ginosar, S., Zhou, T., and Efros, A. A. (2018).
Everybody dance now.
Debevec, P. E., Taylor, C. J., and Malik, J. (1996). Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, pages 11–20, New York, NY, USA. ACM.
Eslami, S. M. A., Jimenez Rezende, D., Besse, F., Viola, F., Morcos, A. S., Garnelo, M., Ruderman, A., Rusu, A. A., Danihelka, I., Gregor, K., Reichert, D. P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N., King, H., Hillier, C., Botvinick, M., Wierstra, D., Kavukcuoglu, K., and Hassabis, D. (2018). Neural scene representation and rendering. Science, 360(6394):1204–1210.
Germann, M., Sorkine-Hornung, A., Keiser, R., Ziegler, R., Würmlin, S., and Gross, M. H. (2010). Articulated billboards for video-based rendering. Comput. Graph. Forum, 29:585–594.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2016). Image-to-image translation with conditional adversarial networks. arXiv preprint.
Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision.
Karras, T., Aila, T., Laine, S., and Lehtinen, J. (2017). Progressive growing of GANs for improved quality, stability, and variation. CoRR, abs/1710.10196.
Levoy, M. and Hanrahan, P. (1996). Light field rendering. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, pages 31–42, New York, NY, USA. ACM.
Liu, L., Xu, W., Zollhoefer, M., Kim, H., Bernard, F., Habermann, M., Wang, W., and Theobalt, C. (2018). Neural rendering and reenactment of human actor videos.
Lombardi, S., Simon, T., Saragih, J. M., Schwartz, G., Lehrmann, A. M., and Sheikh, Y. (2019). Neural volumes: Learning dynamic renderable volumes from images. CoRR, abs/1906.07751.
Martin-Brualla, R., Pandey, R., Yang, S., Pidlypenskyi, P., Taylor, J., Valentin, J. P. C., Khamis, S., Davidson, P. L., Tkach, A., Lincoln, P., Kowdle, A., Rhemann, C., Goldman, D. B., Keskin, C., Seitz, S. M., Izadi, S., and Fanello, S. R. (2018). LookinGood: Enhancing performance capture with real-time neural re-rendering. CoRR, abs/1811.05029.
Meshry, M., Goldman, D. B., Khamis, S., Hoppe, H., Pandey, R., Snavely, N., and Martin-Brualla, R. (2019). Neural rerendering in the wild. CoRR, abs/1904.04290.
Phong, B. T. (1975). Illumination for computer generated
pictures. Commun. ACM, 18(6):311–317.
Schödl, A. and Essa, I. A. (2002). Controlled animation of video sprites. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’02, pages 121–127, New York, NY, USA. ACM.
Schödl, A., Szeliski, R., Salesin, D. H., and Essa, I. (2000). Video textures. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, pages 489–498, USA. ACM Press/Addison-Wesley Publishing Co.
Shysheya, A., Zakharov, E., Aliev, K.-A., Bashirov, R., Burkov, E., Iskakov, K., Ivakhnenko, A., Malkov, Y., Pasechnik, I., Ulyanov, D., Vakhitov, A., and Lempitsky, V. (2019). Textured neural avatars.
Thies, J., Zollhöfer, M., and Nießner, M. (2019). Deferred neural rendering: Image synthesis using neural textures.
Volino, M., Casas, D., Collomosse, J., and Hilton, A.
(2014). Optimal representation of multiple view
video. In Proceedings of the British Machine Vision
Conference. BMVA Press.
Xu, F., Liu, Y., Stoll, C., Tompkin, J., Bharaj, G., Dai, Q., Seidel, H.-P., Kautz, J., and Theobalt, C. (2011). Video-based characters: Creating new human performances from a multi-view video database. In ACM SIGGRAPH 2011 Papers, SIGGRAPH ’11, pages 32:1–32:10, New York, NY, USA. ACM.
Rig-space Neural Rendering: Compressing the Rendering of Characters for Previs, Real-time Animation and High-quality Asset Re-use