ACKNOWLEDGEMENTS
The authors thank Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for the financial support of this work and Nvidia for providing GPUs.
REFERENCES
Boom, B. J., Orts-Escolano, S., Ning, X. X., McDonagh, S., Sandilands, P., and Fisher, R. B. (2015). Interactive light source position estimation for augmented reality with an RGB-D camera. Computer Animation and Virtual Worlds.
Calian, D. A., Mitchell, K., Nowrouzezahrai, D., and Kautz, J. (2013). The shading probe: Fast appearance acquisition for mobile AR. In SIGGRAPH Asia 2013 Technical Briefs, page 20. ACM.
Debevec, P. (2005). Image-based lighting. In ACM SIGGRAPH 2005 Courses, page 3. ACM.
Debevec, P., Graham, P., Busch, J., and Bolas, M. (2012). A single-shot light probe. In ACM SIGGRAPH 2012 Talks, page 10. ACM.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., and Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115–118.
González, Á. (2010). Measurement of areas on a sphere using Fibonacci and latitude–longitude lattices. Mathematical Geosciences, 42(1):49–64.
Han, J., Shao, L., Xu, D., and Shotton, J. (2013). Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 43(5):1318–1334.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Hold-Geoffroy, Y., Sunkavalli, K., Hadap, S., Gambaretto, E., and Lalonde, J.-F. (2016). Deep outdoor illumination estimation. arXiv preprint arXiv:1611.06403.
Hosek, L. and Wilkie, A. (2012). An analytic model for
full spectral sky-dome radiance. ACM Transactions
on Graphics (TOG), 31(4):95.
Jiddi, S., Robert, P., and Marchand, E. (2016). Reflectance and illumination estimation for realistic augmentations of real scenes. In IEEE Int. Symp. on Mixed and Augmented Reality, ISMAR'16 (poster session).
Jimenez, J. and Gutierrez, D. (2010). GPU Pro: Advanced Rendering Techniques, chapter Screen-Space Subsurface Scattering, pages 335–351. AK Peters Ltd.
Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., Rueckert, D., and Glocker, B. (2017). Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36:61–78.
Knecht, M., Traxler, C., Mattausch, O., and Wimmer, M. (2012). Reciprocal shading for mixed reality. Computers & Graphics, 36(7):846–856.
Lalonde, J.-F., Efros, A. A., and Narasimhan, S. G. (2012). Estimating the natural illumination conditions from a single outdoor image. International Journal of Computer Vision, 98(2):123–145.
Lalonde, J.-F., Narasimhan, S. G., and Efros, A. A. (2010). What do the sun and the sky tell us about the camera? International Journal of Computer Vision, 88(1):24–51.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer.
Liu, X., Liang, W., Wang, Y., Li, S., and Pei, M. (2016). 3D head pose estimation with convolutional neural network trained on synthetic images. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 1289–1293. IEEE.
Mandl, D., Yi, K. M., Mohr, P., Roth, P., Fua, P., Lepetit, V., Schmalstieg, D., and Kalkofen, D. (2017). Learning lightprobes for mixed reality illumination. In 16th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), number EPFL-CONF-229470.
Marin, G., Dominio, F., and Zanuttigh, P. (2014). Hand gesture recognition with Leap Motion and Kinect devices. In Image Processing (ICIP), 2014 IEEE International Conference on, pages 1565–1569. IEEE.
Marques, R., Bouville, C., Ribardière, M., Santos, L. P., and Bouatouch, K. (2013). Spherical Fibonacci point sets for illumination integrals. In Computer Graphics Forum, volume 32, pages 134–143. Wiley Online Library.
Pessoa, S. A., Moura, G. d. S., Lima, J. P. S. d. M., Teichrieb, V., and Kelner, J. (2012). RPR-SORS: Real-time photorealistic rendering of synthetic objects into real scenes. Computers & Graphics, 36(2):50–69.
Rajpura, P. S., Hegde, R. S., and Bojinov, H. (2017). Object detection using deep CNNs trained on synthetic images. arXiv preprint arXiv:1706.06782.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Xie, S., Girshick, R., Dollár, P., Tu, Z., and He, K. (2016). Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431.
Zhang, Z. (2012). Microsoft Kinect sensor and its effect. IEEE MultiMedia, 19(2):4–10.
Deep Light Source Estimation for Mixed Reality