for encoding a triangle mesh is spent on the actual
shape, and how much is spent on capturing the particular tessellation, is generally not known, and attempting to fruitfully exploit the shape reference to reduce the data rate with respect to a no-reference encoder turns out to be a surprisingly difficult task.
We present an algorithm based on traversing the
input mesh and predicting vertex positions one by
one. In order to make the prediction, we use the ref-
erence mesh. Our algorithm works with projections
of predicted and encoded vertices onto the surface of
the reference mesh. The difference between the pre-
diction and the actual position (also known as correc-
tion) is encoded intrinsically, restricting the possible
locations to the 2D reference surface and, most im-
portantly, using only two coordinates. Finally, rather
than using a rectangular grid in order to quantize the
coordinates, we use a hexagonal grid that has better
properties in terms of quantization error.
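To illustrate the final step, the snippet below sketches snapping a 2D intrinsic coordinate to the nearest centre of a hexagonal lattice. It uses a standard axial/cube-coordinate parameterisation and a `step` spacing parameter; the paper's exact lattice construction is not specified here, so treat this as an illustrative sketch only.

```python
import math

def quantize_hex(u, v, step):
    """Snap a 2D point (u, v) to the nearest centre of a pointy-top
    hexagonal lattice with circumradius `step` (hypothetical
    parameterisation, for illustration)."""
    # Express the point in fractional axial hex coordinates.
    q = (u * math.sqrt(3) / 3 - v / 3) / step
    r = (2 * v / 3) / step
    # Round to the nearest cell using cube-coordinate rounding.
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz          # re-impose x + y + z = 0
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    # Convert the integer cell index back to 2D coordinates.
    cu = step * math.sqrt(3) * (rx + rz / 2)
    cv = step * 3 / 2 * rz
    return (rx, rz), (cu, cv)
```

The integer pair `(rx, rz)` is what an encoder would entropy-code; the returned centre gives the reconstructed coordinate. Compared to a square grid of equal cell area, the hexagonal cells have a smaller maximum and mean distance to their centre, which is the quantization-error advantage referred to above.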
The rest of the paper is structured as follows:
Section 3 describes the overall process of encoding
a mesh, including requirements imposed on the input
shapes and the steps taken to preprocess the data. Section 4 then describes in detail three relatively independent modules used to assemble the encoding algorithm. Section 5 is devoted to the evaluation of
the performance of the proposed method and its com-
parison with an alternative static mesh encoder.
2 RELATED WORK
Compression of polygonal meshes, and of triangle
meshes in particular, is a field that has been actively
studied for several decades. The problem can be further split into compression of connectivity, which is always understood as lossless, and compression of geometry (vertex positions), where mostly lossy algorithms are employed, sacrificing reconstruction precision in order to achieve a more efficient compression.
For connectivity compression, it is known that as-
suming that every possible triangulation is equally
probable, at least 3.245 bits per vertex (bpv) are
needed in the limit for genus 0 triangle meshes (Tutte,
1962). A guarantee of 4 bpv is provided by the Edge-
Breaker algorithm (Rossignac, 1999), which can be
further improved by employing a more efficient en-
tropy coding. Further improvement is achieved by
valence based encoders (Alliez and Desbrun, 2001),
assuming that regular connectivities with vertex va-
lences close to 6 are more probable than others, reach-
ing data rates of 1-2 bpv for common datasets.
For geometry compression, the most common ap-
proach that complements the EdgeBreaker connectiv-
ity coder well is the parallelogram prediction (Touma
and Gotsman, 1998). Whenever a new vertex is en-
countered during the EdgeBreaker traversal, its po-
sition is predicted by forming a parallelogram from
a known neighbouring triangle. Next, rather than encoding the quantized coordinates directly, only a correction vector, representing the difference between the actual and the predicted position, is stored, which yields a lower entropy and thus a lower bitrate.
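The parallelogram rule itself is compact; the sketch below (illustrative, using NumPy, with names chosen here rather than taken from the cited work) shows the prediction across a gate edge and the resulting quantized correction.

```python
import numpy as np

def parallelogram_predict(a, b, c):
    """Predict the tip vertex across gate edge (a, b), given the opposite
    vertex c of the already-known neighbouring triangle: the prediction
    completes the parallelogram spanned by the known triangle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    return a + b - c

def encode_correction(actual, predicted, step):
    """Quantize only the difference between the actual and the predicted
    position; for well-behaved meshes these corrections are small and
    form a low-entropy symbol stream."""
    return np.round((np.asarray(actual, dtype=float) - predicted) / step).astype(int)
```

For a flat region the prediction is nearly exact, so the corrections cluster around zero, which is precisely what makes subsequent entropy coding effective.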
This approach has been further improved by en-
coding the geometry in a separate pass, when the full
connectivity is known to both the encoder and the de-
coder. This allows adjusting the shape of the paral-
lelogram stencil according to the degrees of vertices
involved in the prediction (Váša and Brunnett, 2013).
Other approaches to geometry encoding have been
proposed as well, building on concepts such as ex-
pressing the geometry in delta coordinates (Sorkine
et al., 2003), known as high-pass coding (HPC) or ex-
pressing the shape in the frequency domain (Valette
and Prost, 2004). These often lead to a different character of the introduced distortion, targeting perceptual quality metrics (Corsini et al., 2013). Recently,
a modification of the HPC has been proposed, which
allows achieving competitive results in terms of both
traditional error metrics, such as mean squared error
or Hausdorff distance, as well as perceptual metrics
(Váša and Dvořák, 2018).
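For context, the delta coordinates underlying high-pass coding can be sketched as the difference between each vertex and the average of its 1-ring neighbours (a uniform Laplacian; the cited schemes use more elaborate weightings, so this is a simplification). The data layout below is illustrative, not taken from the papers.

```python
import numpy as np

def delta_coordinates(vertices, neighbours):
    """Uniform Laplacian (delta) coordinates: vertex minus the mean of
    its 1-ring neighbours. `neighbours[i]` lists the vertex indices
    adjacent to vertex i (illustrative adjacency format)."""
    V = np.asarray(vertices, dtype=float)
    deltas = np.empty_like(V)
    for i, ring in enumerate(neighbours):
        deltas[i] = V[i] - V[ring].mean(axis=0)
    return deltas
```

On a smooth mesh the deltas are small everywhere except at sharp features, so quantizing them redistributes the error toward a smoother, often perceptually less objectionable, distortion.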
Finally, a range of algorithms has been proposed
aiming at various particular desirable properties of
mesh transmission, such as the possibility of partial
decoding (Hoppe, 1996), encoding of mesh sequences
with shared connectivity (Chen et al., 2018) or joint encoding of meshes with color or texture information
(Caillaud et al., 2016). Our paper fits into this last
category, focusing on a special case scenario when
a reference mesh is available.
The proposed compression procedure builds sub-
stantially on the concept of traversal based encod-
ing used by the EdgeBreaker algorithm (Rossignac,
1999). We give a short overview of the algorithm in order to provide a reference for the later exposition.
The EdgeBreaker algorithm starts with a single tri-
angle, which is selected by the encoder and assumed
at the decoder. Then the main loop follows, where in
each iteration, the processed part of the mesh (a single
triangle at the beginning, a larger subset of triangles
in later stages) is expanded by one triangle. The triangle is attached to an implicitly selected border edge of the processed part of the mesh, known as the gate. It
therefore consists of two known vertices and a third,
possibly unknown tip vertex.
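The traversal loop can be sketched as follows. This illustrative snippet reproduces only the gate-driven region growing, not the CLERS symbol stream or the half-edge structures an actual EdgeBreaker implementation relies on; the input format (a list of vertex-index triples) is an assumption made here for brevity.

```python
from collections import deque

def traverse(triangles, start=0):
    """Gate-driven traversal in the spirit of EdgeBreaker: starting from
    one triangle, repeatedly attach the unprocessed triangle across the
    current gate edge. Returns the order in which triangles are visited."""
    # Map each undirected edge to the triangles sharing it.
    edge_tris = {}
    for t, tri in enumerate(triangles):
        for k in range(3):
            e = frozenset((tri[k], tri[(k + 1) % 3]))
            edge_tris.setdefault(e, []).append(t)
    visited = {start}
    order = [start]
    # All three edges of the initial triangle become candidate gates.
    gates = deque(
        frozenset((triangles[start][k], triangles[start][(k + 1) % 3]))
        for k in range(3)
    )
    while gates:
        gate = gates.popleft()
        for t in edge_tris.get(gate, []):
            if t not in visited:
                visited.add(t)
                order.append(t)
                # The newly attached triangle contributes two new gates.
                for k in range(3):
                    e = frozenset((triangles[t][k], triangles[t][(k + 1) % 3]))
                    if e != gate:
                        gates.append(e)
    return order
```

In the real encoder, each iteration additionally emits one connectivity symbol describing the status of the tip vertex, which is what the next paragraph turns to.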
The data stream must indicate the status of the tip
vertex. If it is a new vertex, not yet known to the
Geometry Compression of Triangle Meshes using a Reference Shape