Computation of the φ-Descriptor in the Case of 2D Vector Objects
Jason Kemp, Tyler Laforet and Pascal Matsakis
School of Computer Science, University of Guelph, Stone Rd E, Guelph, Canada
Keywords:
Image Descriptors, Relative Position Descriptors, φ-Descriptor, Spatial Relationships, Vector Data.
Abstract:
The spatial relations between objects, a part of everyday speech, can be described within an
image via a Relative Position Descriptor (RPD). The φ-descriptor, a recently introduced RPD, encapsulates
more spatial information than other popular descriptors. However, algorithms currently exist only for
computing the φ-descriptor of raster objects. In this paper, the first algorithm for the computation of the φ-
descriptor in the case of 2D vector objects is introduced. The approach is based on the concept of Points
of Interest (points on the object boundaries where elementary spatial relations change)
and on dividing the objects into regions according to their corresponding relationships. The capabilities of the
algorithm have been tested and verified against an existing φ-descriptor algorithm for raster objects. The new
algorithm is intended to show the versatility of the φ-descriptor.
1 INTRODUCTION
A Relative Position Descriptor (RPD) provides a
quantitative description of the spatial position of two
image objects with respect to each other (Matsakis
et al., 2015). It can be used, for example, to give a
quantitative answer to the question: “Is object A to
the left of object B?”. The descriptor may indicate
whether A is exactly, not at all, or to some degree
to the left of B. There are many RPD algorithms de-
signed for raster objects, i.e., sets of pixels (2D case)
or voxels (3D case). There are significantly fewer
RPD algorithms designed for vector objects, i.e., sets
of vertices defining polygons (2D case) or polyhedra
(3D case).
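As a toy illustration of this idea only (not the φ-descriptor, and not the method of any work cited in this paper), the following sketch assigns a degree of truth in [0, 1] to "A is to the left of B" for 2D vector objects given as vertex lists. The function name and the vertex-fraction heuristic are assumptions introduced purely for illustration.

```python
# Toy illustration only: a crude degree of truth for "A is to the left of B".
# This is NOT the phi-descriptor and not the method of any cited work.
# Objects are 2D vector objects given as lists of (x, y) vertices.

def degree_left_of(a_vertices, b_vertices):
    """Return a value in [0, 1]: the fraction of A's vertices lying
    strictly to the left of B's leftmost vertex (a deliberately simple
    heuristic; real RPDs aggregate far richer spatial information)."""
    b_min_x = min(x for x, _ in b_vertices)
    left_count = sum(1 for x, _ in a_vertices if x < b_min_x)
    return left_count / len(a_vertices)

# A straddles B's left edge, so the answer is neither "exactly" nor "not at all".
A = [(0, 0), (2, 0), (2, 1), (0, 1)]
B = [(1, 0), (3, 0), (3, 1), (1, 1)]
print(degree_left_of(A, B))  # 0.5 -> A is to some degree to the left of B
```

The point of the sketch is simply that relative position is quantified rather than classified as true or false; an actual RPD such as the force histogram or the φ-descriptor aggregates directional information over the whole extent of both objects, not just a few vertices.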
The force histogram (Matsakis and Wendling,
1999) might be the best known RPD and has found
many applications. Algorithms exist to compute the
force histogram in the case of 2D raster objects, 2D
vector objects, 3D raster objects and 3D vector ob-
jects. In this paper, we focus on another, more re-
cent RPD: the φ-descriptor (Matsakis et al., 2015)
which encapsulates much more spatial relationship
information than the force histogram, and appears to
be much more powerful (Francis et al., 2018). Currently,
an algorithm exists only for computing the φ-descriptor
of 2D raster objects; none exists for 2D vector objects.
Here, we introduce the first algorithm to calculate
the φ-descriptor of 2D vector objects. Background
information is provided in Section 2. The new algo-
rithm is presented in Section 3 and tested in Section
4. Conclusions and future work are in Section 5.
2 BACKGROUND
Spatial relations can be categorized as directional,
topological, and distance. Freeman (1975) was the
first to suggest that spatial relations could be used
to model relative position. He also suggested that a
fuzzy system be used to allow for these relations to be
given a degree of truth. Miyajima and Ralescu (1994)
proposed that an RPD could be created to encapsu-
late spatial relationship information. An RPD pro-
vides a quantitative description of the spatial relations
between two image objects. RPDs are visual descrip-
tors like color, texture and shape descriptors. Symbol
recognition (Santosh et al., 2012), robot navigation,
geographic information systems, linguistic scene de-
scription, and medical imaging are some areas that
benefit from RPDs (Naeem and Matsakis, 2015).
Existing work on RPDs is reviewed in Sections 2.1,
2.2, and 2.3, and methods for measuring the similarity of
two φ-descriptors are explored in Section 2.4.