Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego.
Klokov, R. and Lempitsky, V. (2017). Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 863–872. IEEE.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc.
Lanman, D. and Luebke, D. (2013). Near-eye light field displays. In ACM SIGGRAPH 2013 Emerging Technologies, SIGGRAPH ’13, pages 11:1–11:1, New York, NY, USA. ACM.
Levoy, M. and Hanrahan, P. (1996). Light field rendering. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96, pages 31–42, New York, NY, USA. ACM.
Li, Y., Pirk, S., Su, H., Qi, C. R., and Guibas, L. J. (2016). Fpnn: Field probing neural networks for 3d data. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 307–315, USA. Curran Associates Inc.
Liu, Z., Yeh, R. A., Tang, X., Liu, Y., and Agarwala, A. (2017). Video frame synthesis using deep voxel flow. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4473–4481.
Maturana, D. and Scherer, S. (2015). Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 922–928.
Mora, B., Maciejewski, R., Chen, M., and Ebert, D. S. (2009). Visualization and computer graphics on isotropically emissive volumetric displays. IEEE Transactions on Visualization and Computer Graphics, 15(2):221–234.
Niklaus, S., Mai, L., and Liu, F. (2017). Video frame interpolation via adaptive separable convolution. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 261–270.
Park, E., Yang, J., Yumer, E., Ceylan, D., and Berg, A. C. (2017). Transformation-grounded image generation network for novel 3d view synthesis. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 702–711.
Philips, S., Hlawitschka, M., and Scheuermann, G. (2018). Slice-based visualization of brain fiber bundles - a lic-based approach. pages 281–288.
Qi, C. R., Su, H., Nießner, M., Dai, A., Yan, M., and Guibas, L. J. (2016). Volumetric and multi-view cnns for object classification on 3d data. arXiv:1604.03265 [cs].
Qi, C. R., Yi, L., Su, H., and Guibas, L. J. (2017). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv:1706.02413 [cs].
Riegler, G., Ulusoy, A. O., and Geiger, A. (2017). Octnet: Learning deep 3d representations at high resolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Smola, A. J. and Schölkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14(3):199–222.
Srinivasan, P. P., Wang, T., Sreelal, A., Ramamoorthi, R., and Ng, R. (2017). Learning to synthesize a 4d rgbd light field from a single image. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2262–2270.
Su, H., Maji, S., Kalogerakis, E., and Learned-Miller, E. (2015). Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pages 945–953, Washington, DC, USA. IEEE Computer Society.
Sundén, E., Steneteg, P., Kottravel, S., Jönsson, D., Englund, R., Falk, M., and Ropinski, T. (2015). Inviwo - an extensible, multi-purpose visualization framework. In 2015 IEEE Scientific Visualization Conference (SciVis), pages 163–164.
Wang, P., Li, W., Gao, Z., Zhang, Y., Tang, C., and Ogunbona, P. (2017a). Scene flow to action map: A new representation for rgb-d based action recognition with convolutional neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 416–425.
Wang, P.-S., Liu, Y., Guo, Y.-X., Sun, C.-Y., and Tong, X. (2017b). O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Trans. Graph., 36(4):72:1–72:11.
Wetzstein, G., Lanman, D., Hirsch, M., and Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Trans. Graph., 31(4):80:1–80:11.
Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., and Xiao, J. (2015). 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920.
Zhou, T., Tulsiani, S., Sun, W., Malik, J., and Efros, A. A. (2016). View synthesis by appearance flow. In Computer Vision - ECCV 2016, Lecture Notes in Computer Science, pages 286–301. Springer, Cham.
Synthesising Light Field Volumetric Visualizations in Real-time using a Compressed Volume Representation