during embedding evaluation and refer to orbit manipulation.
The engine computes embedding values in the initial object space because, this way, computations can reuse values already present in this object, such as orbits, neighbor darts, or initial embedding values. However, the engine must also determine where these computations are called, so it manipulates the final object to enumerate every orbit of each orbit type that embeds information. A similar problem appears when an existing embedding value, associated with a given orbit type, must be spread from an initial dart to newly created darts belonging to the same orbit. The engine handles this issue by determining every orbit of the type associated with the given embedding, that is, by partitioning the set of darts into equivalence classes of darts belonging to the same orbit, and then distributing the embedding value to all the darts of each class.
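The orbit partitioning described above can be sketched as a breadth-first walk along the selected links. The following Python sketch assumes a deliberately simplified representation (darts as integers, each link dimension stored as an involution array); the names `orbits` and `spread` are illustrative, not Jerboa's:

```python
from collections import deque

def orbits(darts, links, orbit_type):
    """Partition `darts` into equivalence classes (orbits): darts
    mutually reachable through the dimensions in `orbit_type`."""
    seen, classes = set(), []
    for d in darts:
        if d in seen:
            continue
        # Breadth-first walk through the selected links from dart d.
        orbit, queue = [], deque([d])
        seen.add(d)
        while queue:
            cur = queue.popleft()
            orbit.append(cur)
            for dim in orbit_type:
                nxt = links[dim][cur]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        classes.append(sorted(orbit))
    return classes

def spread(embedding, classes):
    """Copy the embedding value of one dart of each orbit to all
    the darts constituting that orbit."""
    for orbit in classes:
        value = next(embedding[d] for d in orbit if d in embedding)
        for d in orbit:
            embedding[d] = value
    return embedding
```

For instance, on four darts where dimension 0 pairs darts (0,1) and (2,3), the orbit type (0) yields two classes while the orbit type (0,1) may merge them into one.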
All these issues lead us to classify operations with respect to their topological properties. Currently, we identify three classes: update operations, which do not change the topology but only modify embedding values; enrichment operations, which have a single node as left pattern and increase the number of darts without any deletion; and all other operations. This article focuses on enrichment operations, since they are used to make an object more complex (subdivision, triangulation, extrusion, etc.), and both update and enrichment operations can benefit from the same optimization approach. On the contrary, the last class of operations, where deletions might be required, needs additional treatments to preserve the consistency of the topology and of the embedding values, which heavily complicates parallelization. Moreover, update and enrichment operations raise fewer concurrent-access issues than other kinds of rules and are therefore more likely to benefit from parallelism. We leave this third class of rules for future work.
Now, let us consider the different steps of the Jerboa engine (see section 3.3). In enrichment operations, the left pattern is reduced to a single node whose orbit type defines the zone where the enrichment operation is applied.
The second step resorts to user code (to manage embedding values), so a dedicated automatic code analysis would be required to parallelize it. This is not our purpose, so we leave these steps sequential and use them as barriers to synchronize the other steps. The third step, the one that interprets the right part of the rule, offers several parallelization opportunities.
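A minimal sketch of this step structure, assuming hypothetical callbacks `compute_embeddings` (the sequential user code of the second step) and `interpret_right` (the per-dart interpretation of the right pattern), could look like:

```python
from concurrent.futures import ThreadPoolExecutor

def apply_rule(left_darts, compute_embeddings, interpret_right):
    # Step 2: user-supplied embedding code runs sequentially; its
    # completion acts as a barrier before the parallel step starts.
    embeddings = [compute_embeddings(d) for d in left_darts]

    # Step 3: interpreting the right pattern is a parallel map over
    # the left darts (each task writes only to its own results slot).
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(interpret_right, left_darts, embeddings))
    return results
```

The sequential list comprehension before the pool is what makes the user-code step act as a synchronization barrier.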
4.2 Parallelized Engine for Enrichment
Operations
At first, the engine checks whether the rule given as input is compatible with parallelism, that is, whether it is an update or an enrichment rule: (1) there is a unique node in the left pattern of the rule; (2) this unique node must have a full orbit label (no bottom symbol can be used); (3) the left node is preserved in the right pattern (thus no deletion occurs). Our approach also assumes that the computations of embedding values are concurrent-safe.
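These three conditions can be sketched as a simple predicate over a hypothetical rule record (the field names `left`, `right`, `name`, and `orbit` are illustrative, not Jerboa's API); here `None` stands for an omitted (bottom) dimension in the orbit label:

```python
def is_parallel_compatible(rule):
    """Return True when the rule is an update or enrichment rule."""
    if len(rule["left"]) != 1:                     # (1) unique left node
        return False
    node = rule["left"][0]
    if any(dim is None for dim in node["orbit"]):  # (2) full orbit label
        return False
    right_names = {n["name"] for n in rule["right"]}
    return node["name"] in right_names             # (3) left node preserved
```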
Note that, to ensure the consistency constraints of enrichment operations, the orbit associated with the left node usually corresponds to a complete connected component.
4.2.1 Parallelizing Topological Transformations
When the engine starts applying an operation, it first determines all the darts that match the unique node of the left pattern by walking through the orbit given in its label. From this list of darts (called left darts), our engine builds an indexed structure to organize the topological transformations efficiently. Contrary to Jerboa's original data structures, such an indexed, local structure is common and allows for the use of parallel patterns (McCool et al., 2012). It appears as a matrix that gives, for each indexed dart, the index of the dart that is 0-linked with it in the first row, the one that is 1-linked in the second row, and so on for every link. This matrix is first used to represent the initial structure, as can be seen in Table 1. Its width is the number of left darts and it is called the “left adjacency matrix”. It is filled in the first step of the algorithm, during the gathering of all left darts. A similar structure, called the “right adjacency matrix”, is allocated to represent the transformed structure. Since left darts are duplicated for each node in the right pattern, the size of this right matrix is the product of the number of left darts and the number of nodes in the right pattern. These matrices must satisfy the non-orientation constraint: if a dart d is linked to a dart e in a given dimension, then e must be linked to d in the same dimension. The following algorithms guarantee this property. Note that, for update operations, the left and right matrices are the same.
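A minimal sketch of how such a left adjacency matrix could be built and its non-orientation constraint checked, assuming darts as integers and each dimension stored as an array of linked darts (illustrative names, not Jerboa's code):

```python
def left_matrix(left_darts, links, dims=(0, 1, 2)):
    """Build the left adjacency matrix: one row per dimension, one
    column per left dart, holding the *index* of the linked dart."""
    index = {d: i for i, d in enumerate(left_darts)}  # dart id -> column
    return [[index[links[dim][d]] for d in left_darts] for dim in dims]

def check_non_orientation(matrix):
    """Each row must be an involution on indices: if column i holds j,
    then column j must hold i in the same row (dart d linked to e
    implies e linked to d for the same dimension)."""
    return all(row[row[i]] == i for row in matrix for i in range(len(row)))
```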
Table 1: Left matrix of object in Figure 3a.
ID |  0  1  2  3  4  5  6  7  8  9 10 11 12 13
---+------------------------------------------
α0 |  1  0 11 10  5  4  7  6  9  8  3  2 13 12
α1 | 13  2  1  4  3  6  5  8  7 10  9 12 11  0
α2 |  0  1  3  2  4  5  6  7  8  9 11 10 12 13
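As a sanity check, each row of Table 1 is an involution on dart indices, which is exactly the non-orientation constraint stated above:

```python
# Rows of Table 1 (left matrix of the object in Figure 3a).
alpha = [
    [1, 0, 11, 10, 5, 4, 7, 6, 9, 8, 3, 2, 13, 12],  # alpha_0
    [13, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 0],  # alpha_1
    [0, 1, 3, 2, 4, 5, 6, 7, 8, 9, 11, 10, 12, 13],  # alpha_2
]

# Non-orientation constraint: if dart i is linked to dart j in some
# dimension, dart j must be linked back to i in that dimension.
assert all(row[row[i]] == i for row in alpha for i in range(14))
```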
GRAPP 2020 - 15th International Conference on Computer Graphics Theory and Applications