2 PREVIOUS WORK
There are several ways of combining 2D multimodal images: compositing them using color scales and α-blending (Hill et al., 1993); interleaving alternate pixels with independent color scales (Rehm et al., 1994); and alternating the display of the two modality images in synchronization with the monitor scanning so that the human visual system fuses them (Lee et al., 2000). Although these techniques have proven useful, they leave to the observer the task of mentally reconstructing the spatial relationships between the 3D structures. Three-dimensional multimodal rendering provides this perception directly, and it can also help users freely select suitable image orientations for 2D analysis.
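As a simple illustration of the second of these 2D techniques, the following sketch interleaves alternate pixels of two already colormapped slices in a checkerboard pattern; the RGB type, the image layout and the checkerboard choice are illustrative assumptions and not the exact scheme of Rehm et al.

typedef struct { float r, g, b; } RGB;

/* Interleave alternate pixels of two colormapped modality slices:
   even pixels show modality A, odd pixels show modality B. */
void interleave_slices(const RGB *slice_a, const RGB *slice_b,
                       RGB *out, int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            out[i] = ((x + y) % 2 == 0) ? slice_a[i] : slice_b[i];
        }
    }
}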
Current 3D multimodality rendering methods can
be classified into four categories (Stokking et al.,
2003): weighted data fusion, multimodal window dis-
play, integrated data display and surface mapping.
The first technique (Cai and Sakas, 1999), (Ferré
et al., 2004) merges data according to specific weights
at different stages of the rendering process: from
property values (property fusion) to final colors (color
fusion). The second category is a particular case of the first one: it uses weight values of 0 and 1 in order to replace parts of one modality with the other (Stokking et al., 1994). The integrated data display consists of extracting a polygonal surface model
from one modality and rendering it integrated with the
other data (Viergever et al., 1992). The main limita-
tion of this method is the lack of flexibility of the sur-
face extraction pre-process. Finally, surface mapping
maps one modality onto an isosurface of the other.
A typical example is painting functional data onto an
MR brain surface (Payne and Toga, 1990). The draw-
back of this approach is that it only shows a small
amount of relevant information. The Normal Fusion
technique (Stokking et al., 1997) enhances surface
mapping by sampling the functional modality over an interval along rays perpendicular to the surface.
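As an illustration of the first category, the following sketch contrasts property fusion and color fusion using a simple linear weighting; the function names, the RGBA type and the weighting itself are illustrative assumptions and do not reproduce the exact formulations of the cited papers.

typedef struct { float r, g, b, a; } RGBA;

/* Property fusion: merge the raw property values of the two modalities;
   the fused value is classified afterwards by a single transfer function. */
float fuse_properties(float value_a, float value_b, float weight_a)
{
    return weight_a * value_a + (1.0f - weight_a) * value_b;
}

/* Color fusion: each modality is classified separately and the
   resulting colors are merged at the end of the pipeline. */
RGBA fuse_colors(RGBA color_a, RGBA color_b, float weight_a)
{
    float weight_b = 1.0f - weight_a;
    RGBA out = {
        weight_a * color_a.r + weight_b * color_b.r,
        weight_a * color_a.g + weight_b * color_b.g,
        weight_a * color_a.b + weight_b * color_b.b,
        weight_a * color_a.a + weight_b * color_b.a
    };
    return out;
}

Note that the multimodal window display of the second category corresponds to restricting weight_a to the values 0 and 1.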
Most of these techniques have been implemented
with volume ray-casting (Zuiderveld et al., 1996)
(Cai and Sakas, 1999), because it naturally supports
pre-registered non-aligned volume models. An effi-
cient splatting of run-length encoded aligned multi-
modalities has been proposed by Ferré et al. (Ferré
et al., 2006). The major drawback of these meth-
ods is that they are software-based, and therefore,
they are not fast enough to provide the interactiv-
ity needed by physicians to analyze the data. Tex-
ture mapping (Krüger and Westermann, 2003) can pro-
vide this speed because it exploits hardware graphics
acceleration. Moreover, the programmability of today's graphics cards provides the flexibility needed to merge multimodal data. Hong et al. (Hong et al., 2005) use 3D texture-based rendering for multimodal data: since their textures are aligned, the same 3D coordinates can be used to fetch the texture values of both models, and the texel values are combined according to three different operators. None of the previous papers treats time-varying data. This is a strong limitation, because there is major interest in observing properties that vary through time, such as cerebral or cardiac activity.
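As an illustration of this kind of per-sample combination of pre-aligned modalities, the following sketch merges two values fetched with the same 3D coordinates using a small set of operators; the operator set and the names are illustrative assumptions and are not necessarily the three operators used by Hong et al.

typedef enum { OP_BLEND, OP_MAX, OP_PRODUCT } CombineOp;

/* Combine two samples fetched from two pre-aligned volumes with the
   same 3D texture coordinates. */
float combine_samples(float sample_a, float sample_b, CombineOp op, float weight_a)
{
    switch (op) {
    case OP_BLEND:   return weight_a * sample_a + (1.0f - weight_a) * sample_b;
    case OP_MAX:     return (sample_a > sample_b) ? sample_a : sample_b;
    case OP_PRODUCT: return sample_a * sample_b;
    }
    return 0.0f; /* not reached for valid operators */
}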
The visualization of time-varying datasets has
been addressed following two main approaches: to
treat time-varying data as an n-D model with n = 4
(Neophytou and Mueller, 2002), or to separate the
time dimension from the spatial ones. In the sec-
ond approach, at each frame, the data values cor-
responding to that instant of time must be loaded.
Reinhard et al. (Reinhard et al., 2002) have ad-
dressed the I/O bottleneck of time-varying fields in
the context of ray-casting isosurfaces. They parti-
tion each time step into a number of files containing
a small range of iso-values. They use a multiproces-
sor architecture such that, during rendering, while one
processor reads the next time step, the other ones
render the data currently in memory. Binotto et al.
(Binotto et al., 2003) propose to compress highly co-
herent time-varying datasets into 3D textures using a
simple indexing scheme that can be im-
plemented using fragment shaders. Younesy et al.
(Younesy et al., 2005) accelerate data load at each
frame using a differential histogram table that takes
into account data coherence. Aside from data load-
ing, frame-to-frame coherence can also be taken into
account to speed up the rendering step itself. Sev-
eral authors have exploited it in ray-casting (Shen
and Johnson, 1994) (Ma et al., 1998) (Liao et al.,
2002), shear-warp (Anagnostou et al., 2000), texture-
mapping (Ellsworth et al., 2000) (Lum et al., 2002)
(Schneider and Westermann, 2003) (Binotto et al.,
2003) and splatting (Younesy et al., 2005).
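As a simple illustration of how frame-to-frame coherence can reduce the per-frame data transfer, the following sketch re-uploads only the bricks of the current time step that differ from the previous one; the brick granularity and the upload callback are illustrative assumptions rather than the scheme of any of the cited works.

#include <stddef.h>
#include <string.h>

/* Upload only the bricks that changed with respect to the previous time step;
   unchanged bricks remain untouched in texture memory. */
void update_changed_bricks(const unsigned char *prev, const unsigned char *curr,
                           size_t num_bricks, size_t brick_size,
                           void (*upload_brick)(size_t index, const unsigned char *data))
{
    for (size_t b = 0; b < num_bricks; ++b) {
        const unsigned char *p = prev + b * brick_size;
        const unsigned char *c = curr + b * brick_size;
        if (memcmp(p, c, brick_size) != 0)
            upload_brick(b, c);
    }
}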
In this paper, we propose to use 3D texture map-
ping to perform multimodal rendering for both sta-
tic and time-varying modalities. For the latter type of
data, we propose an efficient compression mechanism
based on run-length encoding data through time.
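The general idea of run-length encoding a voxel's values through time can be sketched as follows; this is a deliberately simplified illustration, and the actual texture encoding used by our pipeline is described in the next sections.

#include <stddef.h>

typedef struct { unsigned char value; unsigned short run; } TimeRun;

/* Encode num_steps consecutive time samples of a single voxel into
   (value, run-length) pairs; returns the number of runs written to out. */
size_t rle_encode_time(const unsigned char *samples, size_t num_steps, TimeRun *out)
{
    size_t runs = 0;
    for (size_t t = 0; t < num_steps; ) {
        size_t start = t;
        while (t < num_steps && samples[t] == samples[start])
            ++t;
        out[runs].value = samples[start];
        out[runs].run   = (unsigned short)(t - start);
        ++runs;
    }
    return runs;
}

When the property is coherent through time, the number of runs is much smaller than the number of time steps, which is what makes this compression worthwhile.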
3 OVERVIEW
Figure 1 shows the pipeline of our method. We encode the time-varying data in a 2D texture (Time Codes 2D Texture) that we use, at each frame, to update a 3D texture (Time-Varying Data 3D Texture). This mech-