Rendering Synthetic Objects into Full Panoramic Scenes using Light-depth Maps

Aldo René Zang¹, Dalai Felinto² and Luiz Velho¹

¹ Visgraf Laboratory, Institute of Pure and Applied Mathematics (IMPA), Rio de Janeiro, Brazil
² Fisheries Centre, University of British Columbia, Vancouver, Canada
Keywords: Augmented Reality, Photorealistic Rendering, HDRI, Light-depth Map, 3D Modeling, Full Panorama.
Abstract: This photorealistic rendering solution addresses the insertion of computer-generated elements into a captured panoramic environment. The pipeline supports productions aimed especially at spherical displays (e.g., fulldomes). Full panoramas have been used in computer graphics for years, yet their common usage is limited to environment lighting and reflection maps for conventional displays. With a keen eye on what may be the next trend in the filmmaking industry, we address the particularities of those productions, proposing a new representation of the space that stores depth together with the light map, in a full panoramic light-depth map. Another novelty in our rendering pipeline is the one-pass solution that blends real and synthetic objects simultaneously, without the need for post-processing effects.
1 INTRODUCTION
In recent years, we have seen an increasing demand for immersive panorama productions, which we believe point to a future of cinema innovation.
We wanted to validate a workflow that starts from panorama capture, inserts digital elements to build a narrative, and brings the result back into the panorama space. No tool currently on the market accounts for this complete framework.
To address that, in a previous work we presented an end-to-end framework to combine panorama capture and rendered elements. The complete pipeline covers all aspects of the environment capture, together with the steps needed to work with a custom light-path algorithm that can handle this information (Felinto et al., 2012).
The original problem we faced was that full panoramas are directional maps. They are commonly used in the computer graphics industry for environment lighting and limited reflection maps, and they work fine when a fully coherent space is not required. However, if a full panorama is the output format, we need more than previous works can provide.
In that work (Felinto et al., 2012) we introduced the concept of light-depth environment maps: a special environment light field map with a depth channel used to compute the real-world positions of light samples.
Rendering with a light-depth environment map, however, is non-trivial. Here we propose a solution for panoramic photorealistic rendering of synthetic objects inserted into a real environment, using a single-pass path-tracing algorithm. We generate an approximation of the relevant environment geometry to obtain a complete simulation of shadows and reflections between the environment and the synthetic elements.
Figure 1: The radiance channel of the environment and the depth channel used to reconstruct the light positions.
2 LIGHT-DEPTH MAP
A light-depth map contains both the radiance and the spatial displacement (i.e., depth) of the environment light. The traditional approach treats an environment map as a set of infinitely distant, directional lights. In this new approach the map also carries information about the geometry of the environment, so we can treat it as a set of point lights instead of directional lights.
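The promotion of a directional sample to a point light can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a latitude-longitude (equirectangular) parameterization with the camera at the origin, and that the depth channel stores the radial distance to the environment surface along each texel's direction; the function name and coordinate convention are our own.

```python
import math

def latlong_to_point(u, v, depth):
    """Convert an equirectangular texel (u, v in [0, 1)) plus its
    depth value into a 3D point-light position, camera at origin.

    u spans longitude (-pi .. pi), v spans latitude (-pi/2 .. pi/2).
    `depth` is the radial distance stored in the depth channel.
    """
    theta = (u - 0.5) * 2.0 * math.pi   # longitude
    phi = (v - 0.5) * math.pi           # latitude
    # Unit direction for this texel of the environment map.
    dx = math.cos(phi) * math.sin(theta)
    dy = math.sin(phi)
    dz = math.cos(phi) * math.cos(theta)
    # Scaling the direction by the stored depth turns the
    # infinitely distant directional sample into a point light.
    return (depth * dx, depth * dy, depth * dz)
```

With a conventional environment map only the direction (dx, dy, dz) would be available; the depth channel is what allows shadows and reflections to be computed in a spatially coherent scene.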
René Zang A., Felinto D. and Velho L. Rendering Synthetic Objects into Full Panoramic Scenes using Light-depth Maps. DOI: 10.5220/0004216602090212. In Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications (GRAPP 2013), pages 209-212. ISBN: 978-989-8565-46-4. Copyright © 2013 SCITEPRESS (Science and Technology Publications, Lda.)