6 CONCLUSIONS AND FUTURE WORK
Applying depth heuristics to warp images for view synthesis in light field volume rendering produces good results, and we recommend this as a first step for the problem. Additionally, learning a residual light field improves the visual consistency of the geometrically based warping function and is particularly useful for views far from the reference view. Our light field synthesis is fast compared to existing methods, but is still too slow to compete with direct volume rendering in many cases. In contrast to light field volume rendering, however, the time for our synthesis is independent of the volume resolution and rendering effects, depending only on the resolution of the sampled volume-rendered image.
Our view synthesis results for light field volume rendering are of high quality, and deep learning can be applied effectively to this problem, but the geometrical image-warping bottleneck prevents synthesis at interactive rates.
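As a rough sketch of the depth-driven warping step described above, the following NumPy function forward-warps a reference view to a horizontally shifted light field view using a per-pixel depth map. The function name, the simple horizontal-parallax camera model, and the pinhole disparity relation d = f·b/z are illustrative assumptions; the paper's actual warping function may differ.

```python
import numpy as np

def warp_view(ref_img, depth, baseline, focal):
    """Forward-warp a reference view to a horizontally shifted view
    using per-pixel depth (illustrative sketch, not the paper's code).

    Pixels are splatted to their disparity-shifted columns; splatting
    proceeds far-to-near with a z-buffer so closer surfaces win.
    """
    h, w = depth.shape
    out = np.zeros_like(ref_img)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / depth                 # shift in pixels
    xs = np.arange(w)[None, :]
    xs_new = np.round(xs - disparity).astype(int)        # target column
    valid = (xs_new >= 0) & (xs_new < w)
    order = np.argsort(-depth, axis=None)                # far-to-near
    for idx in order:
        y, x = np.unravel_index(idx, depth.shape)
        tx = xs_new[y, x]
        if valid[y, x] and depth[y, x] < zbuf[y, tx]:
            zbuf[y, tx] = depth[y, x]
            out[y, tx] = ref_img[y, x]
    return out
```

The per-pixel splatting loop is the bottleneck such a CPU sketch makes explicit; a practical implementation would vectorise or move this scatter to the GPU.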
In future work, we would be keen to consider more datasets and transfer functions with various levels of transparency to help generalise this approach. We would also be interested in investigating further image-warping procedures to identify potential optimisations. One possible technique for effective synthesis is to use multiple depth heuristics and a CNN to combine them into a single depth map. Moreover, incorporating additional volume information alongside a depth map and a volume-rendered view could be beneficial. Given the expense of 3D CNNs learning over volumes, we expect that 2D CNNs learning from multiple images will come to dominate on volumetric data in future years.
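The proposed fusion of multiple depth heuristics could be reduced, at its simplest, to a per-pixel softmax-weighted combination of the candidate maps; in the envisaged approach a small CNN would predict the confidence scores, whereas this sketch takes them as given. All names and shapes here are assumptions for illustration.

```python
import numpy as np

def fuse_depth_maps(depth_stack, logits):
    """Fuse K candidate depth maps into one (illustrative sketch).

    depth_stack: (K, H, W) depth maps from K heuristics.
    logits:      (K, H, W) per-pixel confidence scores; in the
                 proposed future work these would be CNN outputs.
    Returns the per-pixel softmax-weighted combination, shape (H, W).
    """
    # numerically stable softmax over the heuristic axis
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)
    return (weights * depth_stack).sum(axis=0)
```

With uniform logits this reduces to the mean of the heuristics; a trained network would instead sharpen the weights towards whichever heuristic is locally most reliable.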
ACKNOWLEDGEMENTS
This research has been conducted with the financial
support of Science Foundation Ireland (SFI) under
Grant Number 13/IA/1895.