Growing Surface Structures
Hendrik Annuth and Christian-A. Bohn
Computer Graphics & Virtual Reality, Wedel University of Applied Sciences, Feldstr. 143, Wedel, Germany
Keywords:
Unsupervised Learning, Competitive Learning, Growing Cell Structures, Surface Reconstruction, Surface
Fitting.
Abstract:
Strictly iterative approaches derived from unsupervised artificial neural network (ANN) methods have been surprisingly efficient for the application of surface reconstruction from scattered 3D points. This comes from the facts that, on the one hand, ANN are able to robustly cluster samples of arbitrary dimension, size, and complexity, and, on the other hand, ANN algorithms can easily be adjusted to specific applications by inventing simple local learning rules without losing the robustness and convergence behavior of the basic ANN approach.
In this work, we go beyond a mere "adjustment" of the basic unsupervised ANN algorithm: we intervene in the central learning scheme and explicitly use the learned topology within the training process. We demonstrate the performance of the novel concept in the area of surface reconstruction.
1 INTRODUCTION
Due to the rapid development in 3D scanning technol-
ogy, real world objects can be scanned faster, more
accurately and at a higher resolution. This allows
creating high quality virtual representations of these
objects that can be utilized for many different pur-
poses in digital data processing. In archaeology and
crime scene investigation a site can be analyzed in-
dependently from its location and time. Architecture
and plant manufacturing involve accurate construc-
tion planning and precise measuring, which can be ef-
ficiently done on a digital model of a construction site.
In films reconstructed surfaces are often used as a sup-
port structure for manually created models. Computer
games use 3D scanning technology to incorporate real
world objects and environments into their otherwise
virtual worlds. Reverse Engineering, the most important field of reconstruction, is a process where existing products or physical prototypes are used as a template to implement a virtual 3D model for production and quality control purposes. Mobile robots can profit
from surface models as maps for path planning, local-
ization, scene interpretation and grasping.
Laser scanning devices take samples of present
surfaces as three dimensional data points. These
points accumulate as an unorganized cloud of points.
Surface reconstruction is the process which creates a virtual model of the surfaces from which these points originate (see Fig. 1). Such a point set typically includes noise, outliers, non-uniform sample densities, and holes.

Figure 1: A photograph of Michelangelo's David (left), a point cloud of the David statue (middle), and a surface fitted into that point cloud (right).

Since the combination of these problems
is often inherently ambiguous, a vast number of dif-
ferent reconstruction approaches, and pre- as well as
post-processing methods exist.
Due to the remaining challenges in the field of re-
construction there is a strong tendency to focus on
ANN based solutions since they are strong with in-
complete and noisy data while being flexibly adapt-
able. Thus, there is hope that, in a more intuitive, "ad-hoc" manner, ANN training can be modified to match the problems under consideration without the need of
a deterministic mathematical model.
2 PREVIOUS WORK
2.1 Classical Surface Reconstruction
Many surface reconstruction approaches have been
suggested over the last decades. Range image meth-
ods can achieve very high resolutions and accu-
racy, while necessitating a very controlled scanning
setup with a limited sensing area (Curless and Levoy,
1996). Region growing approaches (Gopi and Krishnan, 2002; Bernardini et al., 1999) extend an initial surface incrementally at its boundaries. Some
methods reduce a 3D Delaunay triangulation of the
samples to a final surface (Edelsbrunner and Mücke, 1992), some derive it from the Voronoi diagram of the points (Mederos et al., 2005; Amenta et al., 1998).
Combined concepts use region-growing approaches
for the triangulation and an additional global graph
like a 3D Delaunay triangulation as guidance (Kuo and Yau, 2005) or a medial scaffold (MS) (Chang et al., 2009). Balloon models construct a volumetric object surface by the "inflation" of a small surface, as if it were a balloon, inside a point cloud
(Sharf et al., 2006). Another huge class of reconstruction methods demands points that are augmented by their normals to define an incomplete distance function. This function is completed, and the subspace in $\mathbb{R}^3$ for which it returns zero, the zero-level-set, is the surface. The function can be composed of a multitude
of linear functions (Hoppe et al., 1992), quadratic
functions (Kazhdan et al., 2006; Ohtake et al., 2003)
or radial basis functions (Carr et al., 2001). Model
based reconstruction approaches compose a surface
of a multitude of predefined models or components,
which are recognized and fitted into the point cloud
(Schnabel et al., 2009; Gal et al., 2007). Warping
algorithms approximate the surface by deforming an
initial surface to match the given points (Yu, 1999;
Baader and Hirzinger, 1993).
2.2 Artificial Neural Network based
Reconstruction
Many neural computation techniques have been ap-
plied to the problem of surface reconstruction and are
based on unsupervised learning concepts. Algorithms
such as the k-means clustering approach (MacQueen,
1967) use reference vectors to accomplish classifi-
cation and clustering tasks on huge and challenging
data sets ("hard competitive learning"). Kohonen presented the Self-Organizing Map (SOM) (Kohonen, 1982), in which reference vectors are additionally connected, adding a topology ("soft competitive learning") that enables the construction of a surface over the sample set. Kohonen's approach has the disadvantage of a
fixed resolution, which strongly relates the results to
the initial setting and size of the network. Fritzke pre-
sented the Growing-Cells-Structure (GCS) approach
(Fritzke, 1993) where the network grows over time
by dynamically adding reference vectors. The grow-
ing process can be determined by the approximation
error toward a likelihood distribution or a quantiza-
tion error, both of which are measured in relation to
the reference vectors. Based on this algorithm, many convincing surface reconstruction methods, sometimes referred to by the term "refinement strategies", have been presented (Annuth and Bohn, 2012; Ivrissimtzis et al., 2003a; Ivrissimtzis et al., 2003b; Várady et al., 1999). The main disadvantage of the
methods above is the fact that 2D subspaces or sur-
faces are approximated by point distributions instead
of surface models. This becomes most apparent when
modeling flat surface areas where the granularity of
the ANN surface depends on the distribution of sam-
ples and not on the complexity of the underlying sur-
face.
In this work we present an approach which lifts this handicap. Basic ANN learning is changed to a surface-oriented learning that retains the advantages of neural networks while concurrently implementing a reasonable "surface learning". In contrast to former approaches, which modify ANNs by adding additional constraints to the learning rules, our approach introduces a genuinely novel learning scheme.
3 SURFACE RECONSTRUCTION
Surface reconstruction creates a 2D subspace $S$ in a 3D space $\mathbb{R}^3$ that represents a real-world physical surface $S_{phy}$. The information given about $S_{phy}$ is a finite collection of surface samples $P = \{p_1, \dots, p_n\} \subset \mathbb{R}^3$. If closest neighbors in $P$ always indicate a connection on $S_{phy}$, surface neighborhood relations can be investigated by accessing $P$ in a 3D search pattern. Real-world scenarios, however, involve noise, non-uniform sample densities, and incompletely sampled areas. This makes a 3D search unreliable (see Fig. 2), which is the basic problem to overcome in a reconstruction approach. Note that many of the following illustrations are in 2D and therefore show curve reconstructions, but to avoid confusion by swapping terminology, we continue to use the terms of 3D surface reconstruction.
IJCCI2013-InternationalJointConferenceonComputationalIntelligence
350
Figure 2: The topology of a surface has to be derived from 3D samples, but searching in 3D for sample neighbors might produce misleading results. First, a highly sampled surface, where the two surface neighbors of a certain sample can easily be found (left). Then the same surface with a lower sampling, where the search space for the two closest neighbors has grown (middle). And finally, a low sampling where the two closest neighbors are not the correct topological (on-surface) neighbors (right).
Figure 3: Different successive surface stages in a GCS sur-
face reconstruction.
4 GROWING CELL
STRUCTURES
A GCS network is composed of simplices of an initially chosen dimension. In the case of surface reconstruction, a 2D surface $S$ is built of triangles as simplices. The initial surface is a very simple network of triangles such as a tetrahedron. It is positioned roughly at the center of $P$. Since the GCS algorithm is inspired by growing organic tissue, the reference vectors are termed cells. GCS uses an iterative refinement process to fit the current surface into the point data $P$
(see Fig. 3). The refinement process randomly se-
lects a sample of P and deforms the current surface in
order to progressively minimize its distance. This ba-
sic step is repeated and in each iteration the local ap-
proximation error is measured. These errors are used
to determine surface areas which need to be refined
by local subdivision processes. Subdivision and fur-
ther iterations lead to a better match of the surface
to the sample distribution and the process is stopped
when the chosen average approximation error reaches
a threshold or a certain number of reference vectors
is reached. In the following section we analyze the
algorithm in detail for the application of surface re-
construction and show different ways of handling the error, such as likelihood distribution, error minimization, and topology preservation, each of which leads to different results. Since the algorithm represents $S$ as a
triangular mesh, we will use the term vertex instead
of reference vector, since it is more common in this
context.
Algorithm 1: An overview of the GCS algorithm. Conditions 1 and 2 can be defined as simple counters.
1: Given a point cloud $P = \{p_1, \dots, p_n\}$ and an initial surface $S$ in the form of a tetrahedron, represented as an interconnected network of vertices $S = \{v_1, \dots, v_n\}$.
2: repeat
3:   repeat
4:     repeat
5:       Select a random sample $p_x$ of $P$ and search for the winning vertex $v_x$ with the smallest Euclidean distance to $p_x$.
6:       Move $v_x$ towards $p_x$.
7:       Move all direct neighbors of $v_x$ with a lesser factor towards $p_x$.
8:       Adapt the approximation error of $v_x$.
9:     until condition 1 holds.
10:    Add a new vertex in the area of highest approximation error through a vertex split operation.
11:  until condition 2 holds.
12:  Search for the least winning vertex in the network and delete it by an edge collapse operation.
13: until accuracy exceeds a certain threshold.
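To make the control flow concrete, the following Python sketch mirrors Algorithm 1 for the likelihood-driven variant described in Section 4.1 below. It is a minimal illustration under our own assumptions, not the paper's implementation: the mesh API (nearest_vertex, vertex_split, edge_collapse), the vector type of the vertex positions (e.g. NumPy arrays), and all numeric parameters are hypothetical.

    import random

    EPS_WINNER = 0.05    # assumed learning rate of the winning vertex
    EPS_NEIGHBOR = 0.01  # assumed lesser rate for its direct neighbors
    DECAY = 0.995        # assumed per-iteration decay of the signal counters

    def gcs_fit(points, mesh, adapts=100, splits=10, max_vertices=10000):
        """Fit the mesh into the point cloud P, mirroring Algorithm 1."""
        while len(mesh.vertices) < max_vertices:       # stopping criterion
            for _ in range(splits):                    # condition 2 as a counter
                for _ in range(adapts):                # condition 1 as a counter
                    p = random.choice(points)          # line 5: random sample
                    v = mesh.nearest_vertex(p)         # line 5: winner search
                    v.pos += EPS_WINNER * (p - v.pos)  # line 6: move winner
                    for n in v.neighbors:              # line 7: move neighbors
                        n.pos += EPS_NEIGHBOR * (p - n.pos)
                    v.signal += 1.0                    # line 8: adapt error
                    for u in mesh.vertices:            # older signals fade
                        u.signal *= DECAY
                worst = max(mesh.vertices, key=lambda u: u.signal)
                mesh.vertex_split(worst)               # line 10: refinement
            least = min(mesh.vertices, key=lambda u: u.signal)
            mesh.edge_collapse(least)                  # line 12: deletion
        return mesh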
4.1 Likelihood Distribution
The approximation error (Algorithm 1, line 8) can be altered toward a likelihood distribution or a quantization error (see Section 4.2). The vertices create a likelihood distribution if for every given vertex $v \in S$ the likelihood of being the closest neighbor to a randomly chosen sample $p \in P$ is equal. If we see vertices being closest neighbors as the result of a probability experiment, this approximation resembles entropy maximization. This means that the information carried by any given sample is of the same importance. Such representations are especially important in pattern recognition and statistical analysis.
To achieve this, every vertex carries a signal counter. To approximate the likelihood distribution (Algorithm 1, line 8), these counters are simply incremented when a vertex is closest to an input sample. When a new vertex is added (Algorithm 1, line 10), the highest error term refers to the space where the most samples share the same vertex.
Since older signals tend to be less representative, all signal counters are decreased at every iteration cycle by a certain factor (Algorithm 1, line 8). With a likelihood distribution, signal counters can also be used
GrowingSurfaceStructures
351
to determine misplaced vertices in spaces that contain few or even no samples, since their signal counters become very low due to the constant decrease. This concept has therefore been used in most implementations for surface reconstruction.
These algorithms, however, approximate the likelihood distribution of the vertices and not the surface. If a flat surface is approximated, the algorithm will create many vertices in proportion to the number of samples, although the area could be accurately approximated with only a few triangles.
4.2 Distance Minimization
When the approximation error (Algorithm 1, line 8) is changed to account for a quantization error, vertices are placed so as to expose the smallest Euclidean distance to the samples in $P$. If the samples in $P$ are equally distributed, the goals of a likelihood distribution and an error minimization are nearly the same. If, however, some regions are represented by a denser sampling than others, these regions will be represented by fewer vertices in the error minimization scenario: the error, measured as the Euclidean distance, can be lowered more significantly in regions where samples lie farther apart, hence vertices are more likely to be added there. This approximation is typically used for vector quantization in data compression. To implement this behavior, every vertex carries an error value which is increased (Algorithm 1, line 8) by the distance or the squared distance between the winning vertex and the given sample. The highest approximation error refers to the space where the samples lie farthest away from a vertex, thus a new vertex will be added there (Algorithm 1, line 10). In contrast to a likelihood value, removing a vertex with low distance errors would make no sense, since these vertices indicate that they are well placed. However, in case the created topology matters, as in surface reconstruction, it is reasonable to remove such vertices for memory efficiency reasons, since they might be geometrically redundant.
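As a compact contrast of the two error handlings, the per-winner updates could be sketched as follows; the attribute names (signal, error), the decay constant, and the NumPy vector type are our assumptions, not code from the paper:

    import numpy as np

    DECAY = 0.995  # assumed decay factor for the likelihood variant

    def adapt_error(winner, p):
        # Likelihood distribution (Section 4.1): count wins, older signals fade.
        winner.signal = winner.signal * DECAY + 1.0
        # Quantization error (Section 4.2): accumulate the squared Euclidean
        # distance between the winning vertex and the given sample.
        winner.error += np.linalg.norm(p - winner.pos) ** 2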
The basic problem is the difference between approximating the right topology and achieving the lowest distance error. We will discuss this problem in more detail in Section 4.3. When this approximation error is used, the deletion of misplaced vertices needs to be handled separately. Despite this disadvantage, minimizing the distance error might be more convenient for surface reconstruction. But this is not the case if the approximation minimizes the distance to the vertices instead of to the surface (see Fig. 4), since a surface approximation aims to fit $S$ as closely as possible to $P$.
Figure 4: Samples and an approximated surface (top), distance between vertices and samples (middle), distance between surface and samples (bottom). In the case of surface approximation, the distance to the surface is obviously the more meaningful measure.
Many implementations have tackled this problem indirectly. (Annuth and Bohn, 2012) presents a roughness adaptation where the average surface curvature is compared to that of a winning vertex; curved areas produce higher signals, leading to more subdivisions there. In (Jeong et al., 2003) vertices additionally carry normals, and the algorithm counts how much these normals move in order to increase subdivisions in such areas.
These changes lead to an implicit representation of
the approximation error within the algorithm, since
curved surface regions need more subdivisions to be
correctly approximated. But the surface approxima-
tion error itself is not explicitly represented.
4.3 Topology Preservation
The SOM (Kohonen, 1982) introduced an unsuper-
vised learning concept with an additional topology.
In the given network $S$, vertices are not allowed to move independently. When a vertex is moved toward a sample, its neighbors are also moved to decrease the created surface tension (Algorithm 1, line 7). This principle adds elasticity to the network: the behavior of a continuous surface is manifested implicitly by creating dependencies between the vertices. A topology can increase the performance in placing reference vectors, since the dependencies between them make their movements more stable and thereby make smoother distributions more likely.

Figure 5: Two surfaces with the same vertices, the same samples, and the same approximation error, that expose an undesired (left) and a desired (right) solution: a triangle placed in an empty space (top) and an incorrect dent in the surface (bottom).

But the created topology itself can be used in many different ways as
well. Data of a high dimensional input space can be
mapped into a space of lower dimension and can then
be visualized or analyzed with less computational ef-
fort (dimensionality reduction). The topology can
also be used for regression analysis where P is known
to originate from an unknown continuous function to
be reconstructed from the data (function approxima-
tion). The SOM uses a static topology, which usually
resembles a square shaped grid. The standard GCS
algorithm also uses a static surface topology while
the connectivity of the network can change (note that
the network connectivity is often also referred to as
topology, perceiving the network as a graph). This
means a network area can be increased in resolution
and thereby gather new vertices and connections, but
the surface topology of a sphere, inherited from the
initial tetrahedron shape, cannot be changed.
So functions that have a different topology cannot be correctly represented. GCS, however, has better surface approximation capabilities than the SOM, since it builds newly created surfaces by refining a former version of that surface. This adapts the vertex resolution in different surface areas toward the target function and gives a surface a certain inertia when being modified, which avoids local failures if a surface is fitted into a challenging point constellation.
If the information of a sample placed in a 3D space
is processed by the algorithm, this information is al-
ways set considering a pre-existing current surface $S$. This is the strategy of the GCS algorithm to overcome the 3D search problem (see Section 3). But the GCS can still get stuck in local minima, and the initial topology can mismatch the target surface.

Figure 6: Samples that originate from a curved surface (top, left), a fitting approximation of this surface and a magnification that shows the vertex-sample distance and the surface-sample distance (top, right), the vertex Voronoi regions which assign a sample to the wrong surface (bottom, left), the surface Voronoi regions which assign the same sample correctly (bottom, right).
In the standard algorithm a destructive method is pre-
sented which uses the average edge length as an in-
dicator to cut out triangles (Fritzke, 1993). In (Ivris-
simtzis et al., 2003a) triangles which are larger than the average size are cut out and boundaries that fall below a certain Hausdorff distance are joined, and in (Annuth and Bohn, 2012) high valences are used as an indicator to cut the surface and low distances in comparison to edge length to join boundaries. With these
changes complex topologies can be created. The main
problem of all presented GCS based algorithms con-
cerning topology issues is the missing representation
of the actual surface within the adaptation process,
which makes many insufficient approximation states
simply not measurable (see Fig. 5).
Even though the given 3D information of samples is set in relation to the existing 2D surface, the surface is still represented as a collection of Voronoi regions of the vertices, since vertex-sample and not surface-sample distances are considered. This concept implicitly includes the assumption that the Voronoi regions of two connected vertices will not be interrupted within their attached surface. But for close or complex-shaped surfaces, this is not the case (see Fig. 6). Here the actual representation of $S$ becomes apparent as a permeable space of independent Voronoi volumes.
Topology preservation and error minimization are two different things. Error minimization tries to create the smallest possible distance to the samples in $P$. Topology preservation, however, is concerned with reaching a topology with $S$ that lies as close as possible to the topology of $S_{phy}$ (see Fig. 7). In order to be topologically correct, every point on one surface needs to have a unique equivalent on the other surface and vice versa, while neighbor relations are preserved, meaning the shortest on-surface path between any two given points projected onto $S$ should always correspond to the one on $S_{phy}$.

Figure 7: A surface and samples (left), an approximation of that surface and the distance error between the surface and the samples (middle), another approximation of that surface with a topological error (right).
5 APPROACH
The GCS algorithm has proven to be a high quality
surface reconstruction tool. However, in our anal-
ysis of the algorithm we saw that topology is only
created implicitly and only accounted for through the
additional adjustment of neighboring vertices. In the
following section we will present our changes to the
general approximation concept of the basic algorithm
and then the improvements that can be made based on
these changes.
5.1 Topology-focused Approximation
The basic algorithm concept focuses on placing ver-
tices in positions likely to decrease the chosen ap-
proximation error. To put the actually created surface
topology into focus, the approximation error needs to
be set in relation to the surface-sample distance. The
most important change is to search for the closest surface element (Algorithm 1, line 5) instead of the closest vertex. The adaptation process (Algorithm 1, lines 6 and 7) can now also be set in relation to a sample being closest to a triangle or to an edge, which gives rise to a greater variety of local surface modifications (see Section 5.2.2). In the basic GCS implementation, the signal counter or error value is car-
ried by the vertices. The surface structure element that most distance errors are measured towards, and that is also the building block of the discretization of $S_{phy}$, is the triangle; it is therefore the structure that carries the local approximation error values in our implementation. The most sensible place for the error
value of any topology focused function approxima-
tion is always the simplex of highest dimension in the
GCS algorithm. The distance between a sample and its closest structure represents the actual distance error to the approximated surface and gives this approximation error considerably more validity (Algorithm 1, line 8).
This allows for better judgments about a current lo-
cal approximation state and the choice of location for
subdivision (Algorithm 1, line 10). The new approximation error also allows for, and demands, a distinction between topology-changing deletions, which correct topologically misplaced surface structures, and non-topology-changing deletions, which remove geometrically redundant structures from the surface (Algorithm 1, line 12). Topology-changing deletions are realized by adding an "age" to every triangle and cutting triangles out when they reach a certain age.
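The elementary operation behind this sample-to-surface error is the distance from a point to a triangle. As an illustration of how such a measurement could look, the following sketch uses the standard closest-point-on-triangle region test known from the computational geometry literature; it is our own sketch under the assumption of NumPy vectors, not code from the paper:

    import numpy as np

    def closest_point_on_triangle(p, a, b, c):
        """Closest point to p on triangle (a, b, c); standard region test."""
        ab, ac, ap = b - a, c - a, p - a
        d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
        if d1 <= 0 and d2 <= 0:
            return a                                   # vertex region A
        bp = p - b
        d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
        if d3 >= 0 and d4 <= d3:
            return b                                   # vertex region B
        vc = d1 * d4 - d3 * d2
        if vc <= 0 and d1 >= 0 and d3 <= 0:
            return a + ab * (d1 / (d1 - d3))           # edge region AB
        cp = p - c
        d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
        if d6 >= 0 and d5 <= d6:
            return c                                   # vertex region C
        vb = d5 * d2 - d1 * d6
        if vb <= 0 and d2 >= 0 and d6 <= 0:
            return a + ac * (d2 / (d2 - d6))           # edge region AC
        va = d3 * d6 - d5 * d4
        if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
            w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
            return b + (c - b) * w                     # edge region BC
        denom = 1.0 / (va + vb + vc)
        return a + ab * (vb * denom) + ac * (vc * denom)   # face region

    def point_triangle_distance(p, a, b, c):
        return np.linalg.norm(p - closest_point_on_triangle(p, a, b, c))

The region that the closest point falls into also tells whether a sample is closest to a vertex, an edge, or the triangle interior, which the following subsections make use of.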
5.2 Adaptations of the Algorithm
Behavior
With the changes described in Section 5.1, additional and more accurate information about the current approximation state is available within the GCS process. This information can be used to create a better approximation result in accuracy and topology. In the following we present our implementation details.
5.2.1 Search for Closest Element
For runtime efficiency reasons we did not use an actual triangle-based spatial subdivision data structure, but kept a vertex-based octree. By searching for a number $num_v$ of vertices and checking their surrounding triangles, we heuristically find the closest structure to a given sample. We used $num_v = 3$, which fails when the curvatures of curved and flat surface areas diverge too much and the areas are close to each other, but we consider this case to be rare (see Fig. 8). The new search process has three possible outcomes: a vertex, an edge, or a triangle (Algorithm 1, line 5).
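A sketch of this heuristic is given below, assuming a hypothetical k-nearest query on the vertex octree (mesh.octree.k_nearest) and reusing the closest-point computation sketched in Section 5.1:

    import numpy as np

    def closest_element(mesh, p, num_v=3):
        """Heuristic closest-structure search: query the vertex octree for
        the num_v nearest vertices and test only their surrounding triangles."""
        candidates = set()
        for v in mesh.octree.k_nearest(p, num_v):   # vertex-based octree query
            candidates.update(v.triangles)
        best, best_d = None, float("inf")
        for tri in candidates:
            q = closest_point_on_triangle(p, *tri.corners)  # see Section 5.1
            d = np.linalg.norm(p - q)
            if d < best_d:
                best, best_d = tri, d
        # The location of the closest point classifies the outcome as a
        # vertex, an edge, or the triangle interior (Algorithm 1, line 5).
        return best, best_d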
5.2.2 Surface Movement
Instead of having only a vertex as the closest element, we can now additionally access an edge or a triangle in our implementation. We modeled three different main movements (Algorithm 1, line 6). Since we now know the distance of a given sample $p$ to the surface, we can compare this distance $d_p$ to the average sample-to-surface distance $\bar{d}_p$.
IJCCI2013-InternationalJointConferenceonComputationalIntelligence
354
Figure 8: The presented approach searches for the closest surface structure, such as a vertex, an edge, or a triangle. However, this process is emulated by a search investigating the surrounding elements of a number of vertices $num_v$, instead of having a search tree actually comprising edges and triangles. The figure shows a sampled curved and a flat surface close by (top, left), and next to it the approximated surface (top, right). The search heuristic works correctly for $num_v = 3$, where three vertices are investigated (bottom, left). The search heuristic fails for $num_v = 2$ (bottom, right), where the vertex connected to the closest triangle is not investigated.
When $d_p$ is only a fraction $lim_{skip}$ of $\bar{d}_p$, we entirely discard the adaptation, since the sample already lies more or less close to the surface. When $d_p$ lies above that but below about $lim_{single}$ times $\bar{d}_p$, we only move the closest vertex, since we consider the surface to be generally correct, but the "joints" of the divisions can still be optimized. We use $lim_{skip} = 0.9$ and $lim_{single} = 1.2$. Note that the distance error distribution within the process is not Gaussian, thus we cannot describe the process in terms of standard deviations. For higher distances we move all vertices of the given structure towards the sample. Due to this, surface widening is less likely to cause spikes and the surface is moved in a more unified way, which produces a smoother surface. The neighborhood movement (Algorithm 1, line 7) is unchanged and accomplished with the Laplace smoothing mechanism (Taubin, 1995) for all first neighbors of the given structure.
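The movement selection could be sketched as follows; reading $lim_{skip}$ and $lim_{single}$ as factors of the average distance is our interpretation of the thresholds above, and closest_vertex, laplace_smooth, and first_neighbors are assumed helpers:

    LIM_SKIP = 0.9    # d_p below this fraction of the average: skip entirely
    LIM_SINGLE = 1.2  # d_p below this factor of the average: single vertex
    EPS = 0.05        # assumed movement step

    def move_surface(element, p, d_p, d_avg):
        """The three main movements of Section 5.2.2 (Algorithm 1, lines 6-7)."""
        if d_p < LIM_SKIP * d_avg:
            return                               # sample already lies close enough
        if d_p < LIM_SINGLE * d_avg:
            moved = [element.closest_vertex(p)]  # optimize the "joints" only
        else:
            moved = element.vertices             # move the whole structure uniformly
        for v in moved:
            v.pos += EPS * (p - v.pos)
        laplace_smooth(first_neighbors(moved))   # Taubin-style neighbor smoothing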
5.2.3 Distance Error
A sample can be closest to a vertex, an edge, or a
triangle. When it is closest to a triangle the age of
this triangle is set to zero and the distance error is
changed. In case of an edge this is done for both tri-
angles connected to that edge. In case of a vertex it
is done for all triangles connected to this vertex.

Figure 9: A triangle split operation (left) and a triangle collapse operation (right).

If we were to set the error value directly to the given distance, all previous distance errors would be lost; if we, on the other hand, accumulated all distance errors, old distance errors would totally determine the refinement process, since their distances were huge and it would take many subdivisions to decrease them. We
could constantly decrease all error values as described
in Section 4.1, but this would strongly decrease the accuracy of the local distance errors, since it implies that the surface constantly improves everywhere at a constant rate. We wanted the half-life λ of a distance error to be 9 winning operations (see Equation 1).
$$\left(\frac{k-1}{k}\right)^{\lambda} = 0.5 \quad\Rightarrow\quad k = \frac{1}{1 - \sqrt[\lambda]{0.5}}, \qquad err_{new} = \frac{d_p + err_{old}\,(k-1)}{k} \qquad (1)$$
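In code, Equation 1 is a simple exponential moving average whose window size $k$ is derived from the desired half-life; a minimal sketch:

    HALF_LIFE = 9  # half-life of a distance error, in winning operations
    K = 1.0 / (1.0 - 0.5 ** (1.0 / HALF_LIFE))  # window size k from Equation 1

    def update_error(err_old, d_p):
        """Blend a new sample-to-surface distance into a triangle's error;
        after HALF_LIFE further wins, an old error has decayed to one half."""
        return (d_p + err_old * (K - 1.0)) / K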
5.2.4 Refinement
Instead of the vertex, we search for the triangle with the highest approximation error. Subdivision is done by splitting the triangle's surface from one of its three vertices to the opposite edge and then also splitting the other triangle in the mesh that shares this edge. The edge whose adjacent triangle carries the largest error term is taken. Four new triangles, four new edges, and one new vertex are added (see Fig. 9). The error value of a new triangle is half the error value of its predecessor.
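A sketch of this refinement step on an assumed edge-based mesh API (other_triangle, opposite_vertex, add_vertex, add_triangle, and so on are hypothetical, and placing the new vertex at the edge midpoint is our simplification of the vertex-to-opposite-edge split):

    def split_worst_triangle(mesh):
        """Refinement of Section 5.2.4: split the triangle with the highest
        error along the edge whose second triangle carries the largest error,
        inserting a new vertex on that edge (see Fig. 9)."""
        t = max(mesh.triangles, key=lambda tr: tr.error)
        edge = max(t.edges, key=lambda e: e.other_triangle(t).error)
        u = edge.other_triangle(t)                  # neighbor sharing the edge
        a, b = edge.vertices
        m = mesh.add_vertex(0.5 * (a.pos + b.pos))  # one new vertex
        for tri in (t, u):                          # both triangles are split,
            c = tri.opposite_vertex(edge)           # yielding four new triangles
            half = 0.5 * tri.error                  # children inherit half the error
            mesh.remove_triangle(tri)
            mesh.add_triangle(a, m, c, error=half)
            mesh.add_triangle(m, b, c, error=half)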
5.2.5 Deletion
The deletion process is one of the most important
changes in the algorithm. When using a sample-to-
surface distance error, the error values can only be
used to determine triangles that might be redundant
and do not improve the approximation. In order to have a model representation that is as memory efficient as possible, such a triangle can be deleted by an edge collapse operation on one of its three edges. The
GrowingSurfaceStructures
355
best candidate for the operation is the edge which is surrounded by triangles whose normals expose the least differences $|n|$ to one another, since a collapse of this edge will change the surface gradient the least. This edge can be determined as the one with the highest dot product of the normals of its two vertices, since these normals are calculated based on their surrounding triangles. It is reasonable to set a threshold $max_{|n|}$ for $|n|$ to avoid changes that decrease the surface approximation quality. We choose $max_{|n|}$ to be $\cos 10^{\circ}$, which allows for a maximum angle of 10° between those normals. In addition to the surface error, we need a triangle age $a$ that indicates whether a triangle has reached a maximum age $max_a$. This should happen when the triangle has been missed a certain number of times η. Such triangles are considered to be misplaced and topologically wrong. A misplaced triangle is detached from the rest of the network and then deleted. Note that the deletion of a single triangle can
leave the surface in an undesired state, where for ex-
ample triangles are connected by single vertices only,
which need to be cleared. Since triangles are likely to have different sizes and small triangles are less likely to be winners, the real age $a_{real}$ of a triangle needs to be set in relation to its size; in other words, small triangles age more slowly. This can easily be achieved by an additional age factor $t_{size}$, calculated as the size of a triangle divided by the average triangle size. For small triangles this value is very low, so their age is strongly reduced. In every iteration the age of all triangles is increased by a tiny factor µ that is related to the overall number of triangles $|T|$ (see Equation 2). This is basically the reversed tumble tree (Annuth and Bohn, 2010) principle. We use 1 for $max_a$ and 10 for η. This means, for instance, that if one of four triangles of the same size is not hit once within forty iterations, it is considered to be misplaced. The deletion process now explicitly distinguishes distance-driven and topologically driven deletion.
$$\mu^{|T|\,\eta} = max_a \quad\Rightarrow\quad \mu = \sqrt[|T|\,\eta]{max_a}, \qquad a_{new} = a_{old}\cdot\mu, \qquad a_{real} = t_{size}\cdot a \qquad (2)$$
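Read as code, the aging scheme of Equation 2 might look like the sketch below. One practical assumption on our part: a purely multiplicative age cannot grow from exactly zero, so the sketch resets a winning triangle to a small initial age instead of zero and scales µ accordingly; all helper names are hypothetical:

    A_INIT = 1e-3  # assumed small reset age (the paper resets to zero)
    MAX_A = 1.0    # maximum age max_a before a triangle counts as misplaced
    ETA = 10       # tolerated number of expected misses, η

    def age_triangles(mesh):
        """Increase all triangle ages by the factor µ of Equation 2 and
        collect triangles whose size-corrected age a_real exceeds MAX_A."""
        n = len(mesh.triangles)
        mu = (MAX_A / A_INIT) ** (1.0 / (n * ETA))   # µ as a (|T|·η)-th root
        avg_area = sum(t.area for t in mesh.triangles) / n
        misplaced = []
        for t in mesh.triangles:
            t.age *= mu                              # a_new = a_old · µ
            a_real = (t.area / avg_area) * t.age     # small triangles age slower
            if a_real > MAX_A:
                misplaced.append(t)                  # topologically wrong: cut out
        return misplaced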
5.2.6 Finalization
One of the assets of the GCS algorithm is the fact that $S$ is an approximation of $S_{phy}$ at any time during the running loop: the algorithm can be stopped and resumed at any given time. With the novel surface distance approximation error, a potential stopping point for the algorithm can be chosen more sensibly.
Figure 10: A progression series of the dragon model from left to right with 2500, 5000, and 10000 triangles with the standard algorithm (top) and with the new algorithm (bottom). With the new algorithm the surface converges faster toward the final topology.
6 RESULTS
We ran different tests with several models: the Stanford Dragon is a good example of a point cloud that is relatively challenging in its shape and sample distribution; the Hand model exposes sharp features; the Asian Dragon and the Thai Statue expose a lot of curved areas; the Heating Pipes model includes some extremely noisy areas, non-uniform sample densities, and open surface areas; and the Happy Buddha has regions of surfaces lying close together. As the basic algorithm we used (Annuth and Bohn, 2012), but deactivated its roughness adaptation and sharp feature detection.
Theoretically, any surface would be correctly re-
constructed with the GCS algorithm, if infinite sam-
ples, sufficient memory and time were available. A
reasonable parameter to judge efficiency is the time to
reach a certain accuracy. In our experiments with the new approach, for instance, the topology of the dragon model was approximated much faster, as shown in Fig. 10.
Close surfaces can be handled correctly with our presented method, and a vast number of additional iterations to avoid permeating Voronoi regions is no longer required, as we show in Fig. 11.
Although the standard algorithm is already quite ro-
bust when dealing with noise, we could show that
spikes and rough surface gradients could be greatly
reduced with our presented method. The moving of
entire substructures seems to have a smoothing effect
on the surface (see Fig. 12).
We compared the new and the old algorithm (see Table 1). Generally, the old algorithm creates lower average point-to-surface distances between $P$ and $S$, since it evenly distributes its subdivisions over $S$, whereas the presented algorithm focuses its subdivisions on areas of high approximation error rates. This is visible through a 25% decrease of the mean squared error. Especially for curved models such as the Asian Dragon and the Thai Statue this effect is very salient.

Figure 11: Some thin areas of the Happy Buddha model, reconstructed with 200K triangles with the old (top) and the new (bottom) algorithm. The new algorithm is able to build a correct topology in thin areas at an earlier algorithm stage.

Figure 12: Very noisy section of the Heating Pipes model (left); pointy vertices or spikes on the surface of the standard algorithm (middle) and a smoother surface with our approach (right).
Although the search process is more complex, the extra time costs are nearly leveled out by the discarded operations: with our settings for $lim_{skip}$ and $lim_{single}$, 43.3% of the adaptations are discarded for the Dragon model, and a further 26.3% of the surface movements affect a single vertex only.
When we used the squared distance as the approximation error, the results for both the average distance error and the squared distance error were worse than the results of the standard algorithm. This is reasonable, since most triangles are used up to model tiny but steep curvature (see Fig. 13). In addition, triangles tended to clump even for low error half-lives λ. For the Happy Buddha model this led to many clump-like artifacts. We consider this setting generally impractical.
Test Hardware. A Dell Precision M6400 with an Intel QX9300 (2.53 GHz) processor and 8 GB of 1066 MHz DDR3 dual-channel RAM.

Parameter Settings. $num_v = 3$; $max_a = 1$; $\eta = 10$; $max_{|n|} = \cos 10^{\circ}$; $\lambda = 9$; $lim_{skip} = 0.9$; $lim_{single} = 1.2$. (See (Annuth and Bohn, 2012) for the parameter settings of the basic algorithm.)
Figure 13: Two magnifications of the dragon model reconstructed with 100K triangles, with the square of the point-to-surface distance as the approximation error $d_p^2$ (top) and with just the distance error $d_p$ (bottom). In the latter case, the triangle resolutions at the back and between tail and body are very high and are overrepresented even for curved areas.
7 CONCLUSIONS AND FUTURE
WORK
In this paper we focused on the behavior of the growing cell structures approach as a function approximation algorithm. We analyzed the GCS with the classical adaptation algorithm against the requirements of surface reconstruction. Derived from these observations, we presented our new GCS learning model and showed theoretically and by examples that it outperforms the classical GCS approach. The basic idea of the presented approach is to incorporate the constructed topology into the GCS learning scheme. GCS creates ideal distribution matching, clustering, or dimensionality reduction by an implicit representation of a topology. In our work, we introduced an explicit topology and model the approximation behavior according to it, while retaining the valuable ANN capabilities mentioned above. As a result, we obtained a new ANN learning strategy which showed several advantages compared to classical models. We see this paper as proof of our conceptual change to the GCS algorithm, giving rise to many improvements of the algorithm in future work.
ACKNOWLEDGEMENTS
The authors would like to thank the Stanford Univer-
sity Computer Graphics Laboratory and the Hamburg
Hafen City University for making their laser scanned
data available to us. We also would like to thank Kai
Burjack for his assistance in rendering.
GrowingSurfaceStructures
357
Table 1: Our results for different models. We list the time for the reconstruction process (time), the average distance to P (dist) times 10^4, and the squared distance to P (dist^2) times 10^7. We also tested different values for num_v. Note that the models are normalized by setting the cubic diagonal of their bounding box to one.

                         GCS                     GCS error=d_p           GCS error=d_p^2
    Model (# triangles)  time[s]  dist  dist^2   time[s]  dist  dist^2   time[s]  dist  dist^2
    Hand (20K)                6   3.19    4.38        7   4.29    4.11        6   4.70    5.00
    Dragon (100K)            61   2.29    3.42       65   2.55    2.97       66   3.05    4.34
    Asian Dragon (100K)      55   2.36    3.62       61   2.71    2.53       61   2.83    2.78
    Thai Statue (200K)      146   3.21    15.4      147   3.00    4.02      148   4.05    7.17
    Happy Buddha (200K)     150   1.48    14.1      158   1.89    11.7      160   4.02    63.0

                         num_v = 1               num_v = 5               num_v = 10
    Model (# triangles)  time[s]  dist  dist^2   time[s]  dist  dist^2   time[s]  dist  dist^2
    Dragon (100K)            62   2.35    3.01       71   2.55    3.05      121   2.57    2.95
REFERENCES
Amenta, N., Bern, M., and Kamvysselis, M. (1998). A
new voronoi-based surface reconstruction algorithm.
In Proceedings of the 25th annual conference on
Computer graphics and interactive techniques, SIG-
GRAPH ’98, pages 415–421, New York, NY, USA.
ACM.
Annuth, H. and Bohn, C. A. (2010). Tumble tree: reduc-
ing complexity of the growing cells approach. In Pro-
ceedings of the 20th international conference on Ar-
tificial neural networks: Part III, ICANN’10, pages
228–236, Berlin, Heidelberg. Springer-Verlag.
Annuth, H. and Bohn, C.-A. (2012). Smart growing cells:
Supervising unsupervised learning. In Madani, K.,
Dourado Correia, A., Rosa, A., and Filipe, J., editors,
Computational Intelligence, volume 399 of Studies in
Computational Intelligence, pages 405–420. Springer
Berlin / Heidelberg. 10.1007/978-3-642-27534-0 27.
Baader, A. and Hirzinger, G. (1993). Three-dimensional
surface reconstruction based on a self-organizing fea-
ture map. In In Proc. 6th Int. Conf. Advan. Robotics,
pages 273–278.
Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C.,
Taubin, G., and Member, S. (1999). The ball-
pivoting algorithm for surface reconstruction. IEEE
Transactions on Visualization and Computer Graph-
ics, 5:349–359.
Carr, J. C., Beatson, R. K., Cherrie, J. B., Mitchell, T. J.,
Fright, W. R., McCallum, B. C., and Evans, T. R.
(2001). Reconstruction and representation of 3d ob-
jects with radial basis functions. In Proceedings of the
28th annual conference on Computer graphics and in-
teractive techniques, SIGGRAPH ’01, pages 67–76,
New York, NY, USA. ACM.
Chang, M.-C., Leymarie, F. F., and Kimia, B. B. (2009).
Surface reconstruction from point clouds by trans-
forming the medial scaffold. Comput. Vis. Image Un-
derst., 113(11):1130–1146.
Curless, B. and Levoy, M. (1996). A volumetric method for
building complex models from range images. In Pro-
ceedings of the 23rd annual conference on Computer
graphics and interactive techniques, SIGGRAPH ’96,
pages 303–312, New York, NY, USA. ACM.
Edelsbrunner, H. and Mücke, E. P. (1992). Three-
dimensional alpha shapes. In Volume Visualization,
pages 75–82.
Fritzke, B. (1993). Growing cell structures - a self-
organizing network for unsupervised and supervised
learning. Neural Networks, 7:1441–1460.
Gal, R., Shamir, A., Hassner, T., Pauly, M., and Cohen-
Or, D. (2007). Surface reconstruction using local
shape priors. In Proceedings of the fifth Eurographics
symposium on Geometry processing, SGP ’07, pages
253–262, Aire-la-Ville, Switzerland, Switzerland. Eu-
rographics Association.
Gopi, M. and Krishnan, S. (2002). A fast and effi-
cient projection-based approach for surface recon-
struction. In Proceedings of the 15th Brazilian Sym-
posium on Computer Graphics and Image Processing,
SIBGRAPI ’02, pages 179–186, Washington, DC,
USA. IEEE Computer Society.
Hoppe, H., DeRose, T., Duchamp, T., McDonald, J. A., and
Stuetzle, W. (1992). Surface reconstruction from un-
organized points. In Thomas, J. J., editor, SIGGRAPH,
pages 71–78. ACM.
Ivrissimtzis, I., Jeong, W.-K., and Seidel, H.-P. (2003a).
Neural meshes: Statistical learning methods in surface
reconstruction. Technical Report MPI-I-2003-4-007,
Max-Planck-Institut für Informatik, Saarbrücken.
Ivrissimtzis, I. P., Jeong, W.-K., and Seidel, H.-P. (2003b).
Using growing cell structures for surface reconstruc-
tion. In SMI ’03: Proceedings of the Shape Modeling
International 2003, page 78, Washington, DC, USA.
IEEE Computer Society.
IJCCI2013-InternationalJointConferenceonComputationalIntelligence
358
Jeong, W.-K., Ivrissimtzis, I., and Seidel, H.-P. (2003).
Neural meshes: Statistical learning based on normals.
Computer Graphics and Applications, Pacific Confer-
ence on, 0:404.
Kazhdan, M., Bolitho, M., and Hoppe, H. (2006). Poisson
surface reconstruction. In Proceedings of the fourth
Eurographics symposium on Geometry processing,
SGP ’06, pages 61–70, Aire-la-Ville, Switzerland,
Switzerland. Eurographics Association.
Kohonen, T. (1982). Self-Organized Formation of Topolog-
ically Correct Feature Maps. Biological Cybernetics,
43:59–69.
Kuo, C.-C. and Yau, H.-T. (2005). A delaunay-based
region-growing approach to surface reconstruction
from unorganized points. Comput. Aided Des.,
37(8):825–835.
MacQueen, J. B. (1967). Some methods for classification
and analysis of multivariate observations. pages 281–297.
Mederos, B., Amenta, N., Velho, L., and de Figueiredo,
L. H. (2005). Surface reconstruction from noisy point
clouds. In Proceedings of the third Eurographics sym-
posium on Geometry processing, SGP ’05, Aire-la-
Ville, Switzerland, Switzerland. Eurographics Asso-
ciation.
Ohtake, Y., Belyaev, A., Alexa, M., Turk, G., and Seidel,
H.-P. (2003). Multi-level partition of unity implicits.
In ACM SIGGRAPH 2003 Papers, SIGGRAPH ’03,
pages 463–470, New York, NY, USA. ACM.
Schnabel, R., Degener, P., and Klein, R. (2009). Completion
and reconstruction with primitive shapes. Computer
Graphics Forum (Proc. of Eurographics), 28(2):503–
512.
Sharf, A., Lewiner, T., Shamir, A., Kobbelt, L., and Cohen-
Or, D. (2006). Competing fronts for coarse-to-fine
surface reconstruction. In Eurographics 2006 (Com-
puter Graphics Forum), volume 25, pages 389–398,
Vienna. Eurographics.
Taubin, G. (1995). A signal processing approach to fair sur-
face design. In Proceedings of the 22nd annual con-
ference on Computer graphics and interactive tech-
niques, SIGGRAPH ’95, pages 351–358, New York,
NY, USA. ACM.
Várady, L., Hoffmann, M., and Kovács, E. (1999). Im-
proved free-form modelling of scattered data by dy-
namic neural networks. Journal for Geometry and
Graphics, 3:177–183.
Yu, Y. (1999). Surface reconstruction from unorganized
points using self-organizing neural networks. In
In IEEE Visualization 99, Conference Proceedings,
pages 61–64.
GrowingSurfaceStructures
359