sigils and sealings currently residing in Heidelberg,
Germany, systematically documents and publishes all
known ancient Aegean seals and sealings. The CMS
consists of a physical archive, published volumes,
and an open-access digital database (https://www.uni-heidelberg.de/fakultaeten/philosophie/zaw/cms/databases/databasesfull.html). The database contains photos and tracings of seals and sealings together with manually added metadata in line with current Aegean glyptic research.
Within an interdisciplinary research project we are gathering hundreds of sealings from a collection holding approximately 12,000 objects. We make use of the physical archive and of impressions of the sealings manufactured in plasticine, silicone, and gypsum. These impressions are the negative imprints of the sealings, which allows the motif and its details to be discerned more clearly. The surface of the plasticine negatives, i.e. the sealing impressions, is acquired at 800 dpi resolution using a structured-light 3D scanner. In this work we use the words sealings and sigils interchangeably, as our approach is applicable to any artifacts of similar shape.
3.1 Pre-processing
The resulting 3D data is processed in GigaMesh (https://gigamesh.eu) by computing the surface curvature of the impressions. The computation uses either Multi-Scale Integral Invariants (MSII) (Mara and Krömker, 2017; Mara, 2016) or Ambient Occlusion (AO) (Miller, 1994). While MSII provides better detail for small-scale surface features, for aligning sealing impressions we use AO, which provides smoother, medium-scale surface curvature.
The surface data, augmented with surface curvature, is projected into a raster image with an extent of 400 × 600 pixels. We then apply local pixel histogram scaling with a disk of 50 pixels, further emphasizing medium-scale surface curvature. Finally, the images are padded to provide the necessary space for subsequent deformation operations. The process from a 3D sealing with texture data to a pre-processed sealing raster image is shown in Figure 1.
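A minimal sketch of this 2D stage is given below, assuming the curvature raster has already been exported from GigaMesh as a grayscale image; the use of scikit-image's rank-based local histogram equalization and the padding width are illustrative assumptions, not the exact operations of our pipeline.

```python
# Sketch of the raster pre-processing: local histogram scaling with a
# 50-pixel disk neighborhood, followed by padding for later deformations.
# The equalization routine and padding width are assumptions.
import numpy as np
from skimage import io, img_as_ubyte
from skimage.filters import rank
from skimage.morphology import disk

def preprocess_raster(path, pad=64):
    # Load the 400 x 600 curvature raster exported from the 3D pipeline.
    img = img_as_ubyte(io.imread(path, as_gray=True))
    # Local histogram scaling over a disk of 50 pixels emphasizes
    # medium-scale surface curvature.
    img = rank.equalize(img, footprint=disk(50))
    # Pad so that subsequent deformation operations have room to act.
    return np.pad(img, pad, mode="constant", constant_values=0)
```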
4 CORRESPONDENCE
The underlying assumption of our work is that
pairs of images under study, in our case impressions
of sealings, share visually similar and semantically
equivalent regions. If we then identify and align these
region pairs, the deformation that has been applied to
one of the sealings will become apparent. This identification requires a discriminative distance metric between image patches. We evaluate four different distance metrics of increasing complexity.
In this work we define the left sealing image as the sealing being deformed to match the right sealing image. For reasons of brevity, in the following sections we only give definitions of the feature descriptors for the left sealing image, ${}^{\{D,Y,Z,V,N\}}f_i$, and omit the definitions for the features of the right sealing image, ${}^{\{D,Y,Z,V,N\}}g_j$, as they are identical up to interchanged variables.
4.1 Direct Matching
In the baseline approach sub-regions are compared directly by pixel value using the Euclidean distance. For the pair of sealing images under study, $I, J$, we generate two sets $A, B$ of key-points $a_i, b_j \in \mathbb{R}^2$, arranged in a grid with $60 \times 60$ grid points overlaid on the sealing images. At each key-point we extract sub-regions of the sealing images, the patches $p_i, q_j \in \mathbb{R}^{\alpha \times \beta}$ with height $\alpha$ and width $\beta$, centered around the key-points.
By flattening these patches, we compute the respective feature vectors ${}^{D}f_i, {}^{D}g_j \in \mathbb{R}^{\alpha\beta}$. The distance ${}^{D}d_{ij}$ between the image patches is then given by their Euclidean distance:

${}^{D}d_{ij} = \lVert {}^{D}f_i - {}^{D}g_j \rVert \qquad (1)$
Since our image data depicts local surface curvature, the direct comparison of pixel values represents a direct comparison of the curvature values of the original surfaces.
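A minimal sketch of this baseline is given below; the patch size $\alpha = \beta = 32$ and the border margin are illustrative assumptions, and the images are assumed to be padded as described in Section 3.1 so that every key-point yields a full-size patch.

```python
# Sketch of the direct-matching baseline: a 60 x 60 key-point grid,
# flattened patches as feature vectors, and the Euclidean distance of Eq. (1).
import numpy as np

def grid_keypoints(shape, n=60, margin=16):
    # Evenly spaced key-points a_i as (row, col) pairs, kept `margin` pixels
    # from the border so that full patches can be extracted around each point.
    ys = np.linspace(margin, shape[0] - 1 - margin, n).astype(int)
    xs = np.linspace(margin, shape[1] - 1 - margin, n).astype(int)
    return [(y, x) for y in ys for x in xs]

def patch_feature(img, keypoint, alpha=32, beta=32):
    # Patch p_i of height alpha and width beta centered on the key-point,
    # flattened into the feature vector Df_i in R^(alpha * beta).
    y, x = keypoint
    patch = img[y - alpha // 2:y + alpha // 2, x - beta // 2:x + beta // 2]
    return patch.astype(float).ravel()

def direct_distance(f_i, g_j):
    # Eq. (1): Euclidean distance between the flattened patches.
    return np.linalg.norm(f_i - g_j)
```

With these helpers, a full distance matrix between the two key-point sets can be assembled by evaluating direct_distance for every pair $(i, j)$.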
4.2 DAISY Descriptor
The DAISY image descriptor (Tola et al., 2010) is a reformulation of the SIFT (Lowe, 2004) and GLOH (Mikolajczyk and Schmid, 2005) descriptors that can be computed efficiently for every pixel in an image. This is achieved by computing multi-scale histograms of oriented gradients only once per image region and sharing them among neighboring pixels. Similar to the image patches of the direct comparison, we extract DAISY descriptors ${}^{Y}f_i, {}^{Y}g_j \in \mathbb{R}^{\delta\gamma}$, with $\delta$ orientations and $\gamma$ rings, centered at the key-points $a_i, b_j$.
The distance between the feature descriptors is again computed using the Euclidean distance:

${}^{Y}d_{ij} = \lVert {}^{Y}f_i - {}^{Y}g_j \rVert \qquad (2)$
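A possible realization of this descriptor distance with scikit-image's dense DAISY implementation is sketched below; the step, radius, ring, and orientation settings are assumptions and not necessarily the parameters used in our experiments.

```python
# Sketch of the DAISY-based distance: dense descriptors computed once per
# image, sampled at the key-point grid, and compared with Eq. (2).
import numpy as np
from skimage.feature import daisy

def daisy_features(img, keypoints, step=4, radius=15, rings=3, orientations=8):
    # Dense DAISY descriptors; descs[i, j] describes the pixel at
    # (radius + i * step, radius + j * step).
    descs = daisy(img, step=step, radius=radius, rings=rings,
                  histograms=8, orientations=orientations)
    feats = []
    for y, x in keypoints:
        # Map each (row, col) key-point to the nearest descriptor grid cell.
        gy = int(np.clip(round((y - radius) / step), 0, descs.shape[0] - 1))
        gx = int(np.clip(round((x - radius) / step), 0, descs.shape[1] - 1))
        feats.append(descs[gy, gx])
    return np.array(feats)

def daisy_distance(f_i, g_j):
    # Eq. (2): Euclidean distance between the DAISY descriptors Yf_i and Yg_j.
    return np.linalg.norm(f_i - g_j)
```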
4.3 BOVW Descriptor
We aggregate locally bounded sets of DAISY features into feature vectors to better capture and de-