Figure 2: The Stanford dragon model in the entrance hall.
Figure 3: Shadows and occlusion are handled via differential rendering and reconstructed geometry.
rameters than non-linear parts. While ρ_d was evaluated correctly in most cases, i.e. without degenerate mutants or convergence to local minima, the deviations in ρ_s and especially in n were generally too high. It is still unclear whether a larger population or higher mutation rates would lead to better results. It should be noted that these test cases exclusively deal with known BRDFs and do not contain any lighting information from an image whatsoever, neither virtual nor real lights. In the actual implementation, the process iteratively factors out virtual light sources. Ultimately, the calculation of the BRDF parameters that follows this estimation is replaced by the genetic algorithm.
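As a rough illustration of such a genetic search over Phong-style parameters (ρ_d, ρ_s, n), the following sketch evolves a population toward reference BRDF samples. It is not the paper's implementation; the fitness function, parameter ranges, crossover, and mutation operators are all assumptions.

```python
import random

# Hypothetical Phong-style BRDF lobe: diffuse albedo rho_d plus a
# specular lobe rho_s * cos(theta)^n.  Names and ranges are assumptions.
def phong_brdf(params, cos_theta):
    rho_d, rho_s, n = params
    return rho_d + rho_s * (cos_theta ** n)

def fitness(params, samples):
    # Sum of squared deviations from the reference BRDF samples
    # (lower is better).
    return sum((phong_brdf(params, c) - ref) ** 2 for c, ref in samples)

def evolve(samples, pop_size=50, generations=200, mutation_rate=0.3):
    rng = random.Random(42)
    # Random initial population: rho_d, rho_s in [0,1], exponent n in [1,100].
    pop = [(rng.random(), rng.random(), rng.uniform(1.0, 100.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, samples))
        survivors = pop[:pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Averaging crossover followed by optional Gaussian mutation.
            child = tuple((x + y) / 2 for x, y in zip(a, b))
            if rng.random() < mutation_rate:
                child = (child[0] + rng.gauss(0, 0.05),
                         child[1] + rng.gauss(0, 0.05),
                         max(1.0, child[2] + rng.gauss(0, 5.0)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, samples))
```

Because the best individual always survives, the fitness of the result is monotonically non-increasing over generations; the difficulty reported above (good ρ_d, poor ρ_s and n) corresponds to the fitness landscape being much flatter in the specular terms.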
7 FUTURE IMPROVEMENTS
The most urgent matter right now is a unified model for creating irradiance maps, because the currently used spherical harmonics, for instance, are unsuitable for high-frequency functions. Relating the irradiance maps to actual reflection-model parameters, such as those of specular functions, will then be much easier. Currently, Haar wavelets show promising results, because their multi-resolution analysis captures high frequencies with relatively few coefficients. Ambient occlusion is sufficient as a placeholder for other surface functions right now. However, special effects such as interreflections or caustics are currently not handled. A suitable and dynamic method comparable to LDPRT (Sloan et al., 2005) has to be included in the near future. Also, the current approach of extracting light sources manually from sphere maps severely limits dynamic usage. A stable real-time approach to extracting lights from HDR sphere maps, such as in (Supan and Stuppacher, 2006) or (Korn et al., 2006), still has to be implemented.
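The sparsity argument for Haar wavelets can be illustrated with a minimal 1-D transform: a sharp spike, the kind of high-frequency feature a low-order spherical-harmonics basis smears out, produces only O(log n) non-zero Haar coefficients and is reconstructed exactly. This is an illustrative sketch, not the paper's code.

```python
# Minimal 1-D Haar wavelet decomposition via recursive averaging and
# differencing; len(signal) must be a power of two.
def haar_transform(signal):
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avgs + diffs   # coarse averages first, details after
        n = half
    return out

def haar_inverse(coeffs):
    out = list(coeffs)
    n = 1
    while n < len(out):
        avgs, diffs = out[:n], out[n:2 * n]
        rec = []
        for a, d in zip(avgs, diffs):
            rec += [a + d, a - d]  # undo one averaging/differencing level
        out[:2 * n] = rec
        n *= 2
    return out

# A single spike among 8 samples yields only 4 non-zero coefficients.
spike = [0.0] * 8
spike[3] = 1.0
coeffs = haar_transform(spike)   # 4 of 8 coefficients are non-zero
```

Keeping only the largest-magnitude coefficients of such a transform is the multi-resolution compression step alluded to above; the same principle carries over to 2-D sphere or environment maps.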
REFERENCES
Debevec, P. (1998). Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proceedings of SIGGRAPH 98, CG Proc., Annual Conf. Series, pages 189–198.
Gibson, S., Howard, T. J., and Hubbold, R. J. (2001). Flexible image-based photometric reconstruction using virtual light sources. CG Forum (Proc. Eurographics 2001, Manchester, UK), 20(3):C203–C214.
Grosch, T. (2005). Differential photon mapping: Consistent augmentation of photographs with correction of all light paths. In Alexa, M. and Marks, J., editors, Eurographics 2005, EG Short Pres., pages 53–56.
IR (2007). Avalon. http://www.instantreality.org/.
Jung, Y., Franke, T., Dähne, P., and Behr, J. (2007). Enhancing X3D for advanced MR appliances. In Web3D '07: Proc. of the 12th int. conference on 3D web technology, pages 27–36, New York, USA. ACM Press.
Kautz, J., Daubert, K., and Seidel, H.-P. (2004). Advanced environment mapping in VR applications. Computers & Graphics, 28(1):99–104.
King, G. (2005). Real-time computation of dynamic irradiance environment maps. In Pharr, M., editor, GPU Gems 2, chapter 10, pages 167–176. Addison Wesley.
Korn, M., Stange, M., von Arb, A., Blum, L., Kreil, M., Kunze, K.-J., Anhenn, J., Wallrath, T., and Grosch, T. (2006). Interactive augmentation of live images using an HDR stereo camera. Koblenz, Germany.
OpenSG (2007). OpenSG. http://opensg.vrsource.org/trac.
Sattler, M., Sarlette, R., Zachmann, G., and Klein, R. (2004). Hardware-accelerated ambient occlusion computation. In Girod, B., Magnor, M., and Seidel, H.-P., editors, Vision, Modeling, and Visualization, pages 331–338. Akad. Verl. Aka GmbH, Berlin.
Sloan, P.-P., Luna, B., and Snyder, J. (2005). Local, deformable precomputed radiance transfer. In SIGGRAPH '05: ACM SIGGRAPH 2005 Papers, pages 1216–1224, New York, NY, USA. ACM Press.
Supan, P. and Stuppacher, I. (2006). Interactive image based lighting in augmented reality. Central European Seminar on Computer Graphics.
GRAPP 2008 - International Conference on Computer Graphics Theory and Applications