Authors:
Pascual Abellán, Sergi Grau and Dani Tost
Affiliation:
Divisió Informàtica CREB, UPC, Spain
Keyword(s):
Volume rendering, Multimodality, Time-varying Data, Frame-to-frame coherence, 3D texture mapping.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Rendering; Volume Rendering
Abstract:
In this paper, we propose a rendering method for multimodal and time-varying data based on 3D texture mapping. Our method takes as input two registered voxel models: one with static data and the other with time-varying values. It visualizes the fusion of the two datasets while stepping through time, forward and backward, with time steps of different sizes. At each frame, we use one 3D texture per modality and compute and compose a set of view-aligned texture slices. For each texel of a slice, a fragment shader fetches both 3D textures and performs fusion and shading. We encode the two shading transfer functions in auxiliary 1D textures. Moreover, the weight of each modality in the fusion is not constant but is defined by a 2D fusion transfer function implemented as a 2D texture. We exploit frame-to-frame coherence to avoid reloading the time-varying data texture at each frame: instead, we update it using a 2D texture that run-length encodes the variation of property values through time. The 3D texture update is performed entirely on the GPU, which significantly speeds up rendering. Our method is fast and versatile, and it provides good insight into multimodal data.
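As an illustration of the per-texel work the abstract describes, the following is a minimal CPU-side sketch in Python/NumPy of the fusion and shading lookup. The array names, the transfer-function resolution and the linear blending rule are our own assumptions for illustration; the paper implements this lookup in a fragment shader using 1D and 2D textures.

```python
# CPU analogue of the per-texel fusion and shading step.
# Names, resolution and the blending rule are illustrative assumptions,
# not the authors' shader code.
import numpy as np

N_BINS = 256  # assumed resolution of the lookup textures

# 1D shading transfer functions: scalar value -> RGBA (stand-in data).
tf_static = np.random.rand(N_BINS, 4).astype(np.float32)
tf_dynamic = np.random.rand(N_BINS, 4).astype(np.float32)

# 2D fusion transfer function: weight of the static modality as a
# function of both property values (emulates the 2D texture fetch).
fusion_tf = np.random.rand(N_BINS, N_BINS).astype(np.float32)

def shade_texel(v_static: float, v_dynamic: float) -> np.ndarray:
    """Emulate one fragment: the two 3D-texture fetches have already
    produced v_static and v_dynamic in [0, 1]; classify each through
    its 1D transfer function, then blend with the 2D fusion weight."""
    i = min(int(v_static * (N_BINS - 1)), N_BINS - 1)
    j = min(int(v_dynamic * (N_BINS - 1)), N_BINS - 1)
    rgba_s = tf_static[i]
    rgba_d = tf_dynamic[j]
    w = fusion_tf[i, j]  # weight is data-dependent, not constant
    return w * rgba_s + (1.0 - w) * rgba_d

print(shade_texel(0.3, 0.7))
```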
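The frame-to-frame update can likewise be sketched as applying run-length-encoded deltas to the time-varying volume instead of reloading it. The delta layout `(start, length, new_values)` below is a hypothetical encoding chosen for illustration; the paper stores the runs in a 2D texture and performs the update entirely on the GPU.

```python
# Minimal sketch of the coherence-based update: apply run-length-encoded
# per-frame deltas to the time-varying volume instead of reloading it.
# The delta layout is an assumption for illustration only.
import numpy as np

def apply_rle_delta(volume_flat: np.ndarray, runs) -> None:
    """Update a flattened voxel model in place. Each run is
    (start_voxel, length, new_values) and covers a span of voxels whose
    property value changed between consecutive time steps."""
    for start, length, new_values in runs:
        volume_flat[start:start + length] = new_values

# Example: a 4x4x4 volume advancing one time step, forward.
volume = np.zeros(4 * 4 * 4, dtype=np.float32)
delta_t0_to_t1 = [
    (5, 3, np.array([0.2, 0.4, 0.6], dtype=np.float32)),
    (40, 2, np.array([0.9, 0.9], dtype=np.float32)),
]
apply_rle_delta(volume, delta_t0_to_t1)
print(volume[4:10], volume[39:43])
```

Stepping backward, as the abstract allows, would amount to applying the inverse deltas in reverse order.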