We use four different projection types (parallel, perspective, axial, and planar), chosen on a per-cell basis according to the local neighborhood of each cell. These projections map any point (x, y, z) inside the cubic cell onto a point (x′, y′, z′) on one of its faces, which in turn can be expressed in local coordinates (u, v) inside the face; see Figure 1(c).
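For instance, the parallel projection onto an axis-aligned face of a cell can be sketched as follows (a minimal sketch under assumed conventions; the function name, the [0, 1] face parameterization, and the choice of returning the distance to the face are illustrative, not taken verbatim from our implementation):

```python
def parallel_project(p, cell_min, cell_size, face="+z"):
    """Project a point p inside an axis-aligned cubic cell onto one
    of its faces along a fixed axis, returning local (u, v) face
    coordinates in [0, 1] and the distance from the face."""
    # Normalize p to local cell coordinates in [0, 1]^3.
    x = (p[0] - cell_min[0]) / cell_size
    y = (p[1] - cell_min[1]) / cell_size
    z = (p[2] - cell_min[2]) / cell_size
    if face == "+z":
        # Projected point (x', y', z') = (x, y, 1); (u, v) = (x, y).
        u, v, dist = x, y, 1.0 - z
    elif face == "-z":
        # Projected point (x', y', z') = (x, y, 0); (u, v) = (x, y).
        u, v, dist = x, y, z
    else:
        raise ValueError("only the +z/-z faces are sketched here")
    return (u, v), dist * cell_size
```

Inverting this mapping is trivial: given (u, v) on the face and the stored distance, the original point is recovered by moving back along the projection axis, which is what makes the parameterization easily invertible.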
Given an arbitrary surface mesh, we compute a
set of cubic cells and, using the implicit parameteri-
zation, we sample the input mesh to fill in the 2D ar-
rays with attributes such as distance, normal and color
data. Each attribute is stored as an array texture. This
provides a unified representation of the surface mesh
which allows extracting a level-of-detail approxima-
tion of the original surface. Like geometry images,
our representation can be rendered using a geometry-
based approach suitable for rasterization-based visu-
alization, which involves unprojecting each texel us-
ing the implicit projection and the stored distance
value. Moreover, a unique feature of our representation is that it is amenable to raycasting, using a GPU-based algorithm with a sampling pattern similar to that of relief mapping algorithms. This last rendering approach makes our representation especially suitable for raytracing on the GPU.
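The raycasting pass can be sketched in the spirit of relief mapping: march along the ray across a cell face and stop at the first sample whose ray depth exceeds the stored distance value. The following is a minimal CPU-side sketch with a linear search; the array layout, names, and fixed step count are illustrative assumptions (relief mapping implementations typically refine the hit with a binary search on the GPU):

```python
def raycast_face(heights, enter_uv, exit_uv, enter_d, exit_d, steps=64):
    """Relief-mapping-style linear search: step along the ray in
    (u, v, depth) and return the first sample where the ray depth
    reaches the stored distance at the current texel, or None."""
    n = len(heights)  # square distance array, heights[row][col] in [0, 1]
    for i in range(steps + 1):
        t = i / steps
        # Interpolate the ray between its entry and exit samples.
        u = enter_uv[0] + t * (exit_uv[0] - enter_uv[0])
        v = enter_uv[1] + t * (exit_uv[1] - enter_uv[1])
        d = enter_d + t * (exit_d - enter_d)
        # Nearest-texel lookup into the stored distance array.
        row = min(int(v * n), n - 1)
        col = min(int(u * n), n - 1)
        if d >= heights[row][col]:  # ray has crossed the surface
            return (u, v, d)
    return None  # ray leaves the cell without hitting the surface
```

A GPU version would perform the same loop in a fragment shader, sampling the distance array texture instead of a Python list.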
The rest of the paper is organized as follows.
Section 2 reviews previous work on implicit pa-
rameterizations and relief mapping techniques. Our
projection-based parameterization and the way we
use it to construct and render detailed meshes are dis-
cussed in Section 3. Section 4 discusses our results
on several test models. Finally, Section 5 provides
concluding remarks and future work.
2 PREVIOUS WORK
In recent years a number of methods for parameter-
izing meshes have been proposed, targeting multiple
parameter domains and focusing on different param-
eterization properties such as minimizing distortion
and guaranteeing global bijectivity. Most methods
developed so far target a planar domain and thus re-
quire cutting the mesh into disk-like charts to avoid
excessive distortion and to make the topology of the mesh compatible with that of the domain (see (Floater and Hormann, 2005; Sheffer et al., 2006) for recent surveys). Parameterizations onto more complex domains such as triangle or quadrilateral meshes avoid
cutting the mesh and thus provide seamless parameterizations. A popular base domain is a simplicial complex obtained, e.g., by simply simplifying the original triangle mesh (see e.g. (Lee et al., 1998; Guskov et al., 2000; Praun et al., 2001; Purnomo et al., 2004)),
allowing each vertex of the original mesh to be rep-
resented with barycentric coordinates inside a ver-
tex, edge, or face of the base domain. Some re-
cent approaches target quadrilateral instead of trian-
gle meshes (Tarini et al., 2004; Dong et al., 2006).
Polycube maps (Tarini et al., 2004) use a polycube
(set of axis-aligned unit cubes attached face to face)
as parameter domain. Each vertex of the mesh is
assigned a 3D texture position (a point on the sur-
face of the polycube) from which a simple mapping
is used to look up the texture information from the
2D texture domain. Construction of the parameteri-
zation involves (a) finding a proper polycube roughly
resembling the shape of the given mesh, (b) warping
the surface of the polycube so as to roughly align it
with the surface mesh, (c) projecting each mesh ver-
tex onto the warped polycube along its normal direc-
tion, (d) applying the inverse warp function to the pro-
jected vertices, and (e) optimizing the texture posi-
tions by an iterative process. Unfortunately, no auto-
matic procedure is given for steps (a) and (b), which
are done manually. Our approach also targets a do-
main formed by axis-aligned quadrilateral faces, but
differs from polycube maps in two key points: our
parameterization is implicit and thus does not require
storing texture coordinates, and it is easily invertible
and thus it is amenable to raycasting rendering. Gu et
al. (2002) propose to remesh an arbitrary surface onto
a completely regular structure called Geometry Im-
age which captures geometry as a simple 2D array of
quantized point coordinates. Other surface attributes
like normals and colors can be stored in similar 2D arrays using the same implicit surface parameterization.
Geometry images are built by cutting the mesh and parameterizing the resulting single chart onto a square.
Geometry images have been shown to have numerous
applications including remeshing, level-of-detail rendering, and compression. Our representation has similar uses to geometry images, although its construction is much simpler and requires encoding only distance
values instead of full vertex coordinates. Solid tex-
tures (Perlin, 1985; Peachey, 1985) avoid the param-
eterization problem by defining the texture inside a
volume enclosing the object and directly using the 3D position of surface points as texture coordinates. Octree textures (Benson and Davis, 2002) strive to reduce space overhead through an adaptive subdivision
of the volume enclosing the object. Although octree
textures are amenable to GPU decoding (Lefebvre et al., 2005), subdivision down to texel level (each
octree leaf representing a single RGB value) causes
many unused entries in its nodes, thus limiting the
maximum achievable resolution.
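The kind of lookup an octree texture performs can be sketched as follows (a minimal sketch of the general idea only; the node layout and names are illustrative, not the actual encoding of (Benson and Davis, 2002) or its GPU form in (Lefebvre et al., 2005)):

```python
class OctreeNode:
    """Octree texture node: either a leaf holding one RGB value or
    an internal node with eight children indexed by octant."""
    def __init__(self, rgb=None, children=None):
        self.rgb = rgb
        self.children = children  # list of 8 OctreeNode, or None for a leaf

def lookup(node, x, y, z):
    """Descend the octree using the 3D position of a surface point
    (in [0, 1)^3) as the texture coordinate, as with solid textures."""
    while node.children is not None:
        # Choose the child octant from the high bit of each coordinate.
        octant = (1 if x >= 0.5 else 0) | \
                 (2 if y >= 0.5 else 0) | \
                 (4 if z >= 0.5 else 0)
        # Rescale the coordinate into the chosen child cell.
        x = x * 2 - (1 if x >= 0.5 else 0)
        y = y * 2 - (1 if y >= 0.5 else 0)
        z = z * 2 - (1 if z >= 0.5 else 0)
        node = node.children[octant]
    return node.rgb
```

Each level of descent halves the cell, so resolving individual texels requires one leaf per stored RGB value, which is the source of the unused-entry overhead noted above.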
Space overhead grows even further when using N³-
GRAPP 2010 - International Conference on Computer Graphics Theory and Applications