Authors:
Seán Bruton; David Ganter and Michael Manzke
Affiliation:
Graphics, Vision and Visualisation (GV2), Trinity College Dublin, the University of Dublin, Ireland
Keyword(s):
Volumetric Data, Visualization, View Synthesis, Light Field, Convolutional Neural Network.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics
;
GPU-Based Visualization
;
Scientific Visualization
;
Spatial Data Visualization
;
Volume Visualization
Abstract:
Light field display technology will permit visualization applications to be developed with enhanced perceptual qualities that may aid data inspection pipelines. For interactive applications, this will necessitate an increase in the total number of pixels to be rendered at real-time rates. For visualization of volumetric data, where ray-tracing techniques dominate, this poses a significant computational challenge. To tackle this problem, we propose a deep-learning approach to synthesise viewpoint images in the light field. With the observation that image content may change only slightly between light field viewpoints, we synthesise new viewpoint images from a rendered subset of viewpoints using a neural network architecture. The novelty of this work lies in the method of permitting the network access to a compressed volume representation to generate more accurate images than are achievable with rendered viewpoint images alone. By using this compressed representation, rather than a full volumetric representation, memory- and computation-intensive 3D convolution operations are avoided. We demonstrate the effectiveness of our technique on newly created datasets for this viewpoint synthesis problem. With this technique, it is possible to synthesise the remaining viewpoint images in a light field at real-time rates.
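The core idea of the abstract can be illustrated with a minimal NumPy sketch. All sizes, the four-corner viewpoint subset, and the single 1x1 convolution layer below are illustrative assumptions, not the paper's actual architecture: the point is only that the compressed volume code can be broadcast into per-pixel channels and processed with 2D (here 1x1) convolutions, so no 3D convolution over the volume is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32x32 greyscale viewpoint images, a rendered subset of
# 4 corner viewpoints, and a 16-dim compressed volume code (all illustrative).
H = W = 32
corner_views = rng.random((4, H, W))      # rendered subset of the light field
volume_code = rng.random(16)              # compressed volume representation

# Broadcast the 1-D code to per-pixel feature planes, then stack with the
# rendered views: the network input has 4 + 16 channels per pixel, and only
# 2D operations are ever applied (no 3D convolution over the volume).
code_planes = np.broadcast_to(volume_code[:, None, None], (16, H, W))
x = np.concatenate([corner_views, code_planes], axis=0)   # shape (20, H, W)

# A single 1x1 convolution (a per-pixel linear map) stands in for the
# synthesis CNN; in practice these weights would be learned.
w = rng.standard_normal((1, 20)) * 0.1
synthesised_view = np.tensordot(w, x, axes=([1], [0]))    # shape (1, H, W)

print(synthesised_view.shape)
```

A trained network would replace the random 1x1 map with several learned 2D convolution layers, but the input layout (rendered views stacked with broadcast volume-code channels) is the part this sketch is meant to show.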