pared if they fit a given, known CAD model. Nüchter
et al. (Nüchter and Hertzberg, 2008) introduce a 6D
SLAM approach with semantic object recognition. In
this approach, the objects (like walls, doors, and ceil-
ings) are recognised from composed point clouds. To
classify other objects like robots or humans, trained
classifiers are used. Another approach to semantic
labelling was proposed in (Rusu et al., 2009). The
authors of this work use model-based object recognition
and try to recognise household objects, such as furniture
and stoves, in a kitchen environment. To reason about
these objects, furniture features like knobs and buttons
are extracted beforehand. In (Aydemir et al., 2010),
semantic knowledge is used to search for specific objects:
a reasoning module identifies potential places where the
object could be found. The authors in (Galindo et al., 2008) and
(Galindo et al., 2005) describe the use of semantic
maps for task planning and spatial reasoning. They
use marker identification to perform semantic inter-
pretation of entities and to bridge the gap between the
semantic and spatial domains.
3 APPEARANCE-BASED OBJECT PRE-IDENTIFICATION
In human living environments, and especially in domestic
ones, many regularities with regard to the occurrence of
objects can be found. For example, some objects, like
furniture, have defined heights and are larger than other
objects. Other objects, like flat screens and keyboards,
have approximately the same width but different depths.
In general, objects can be distinguished from each other
based on their differing spatial features. We make use of
these premises in the object pre-classification step.
Another important observation is that most objects in
domestic environments are approximately planar surfaces.
Therefore, we segment these planes and extract spatial
features from them. We call this step "pre-classification"
because, at this stage of our algorithm, objects are
classified only by their spatial features, without taking
their spatial relations to each other into account. As a
result of this step, we obtain a probability distribution
over object classes given the measured values.
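As a toy illustration of such a distribution (the classes, feature choices, and all numbers below are invented for this sketch, not taken from our models), a naive Gaussian classifier over two spatial features could produce the class probabilities like this:

```python
import math

# Hypothetical per-class statistics (means, std-devs) for two spatial
# features: height of the centre of mass [m] and plane area [m^2].
# All numbers are invented for illustration only.
CLASS_MODELS = {
    "table":    ((0.75, 1.20), (0.05, 0.40)),
    "shelf":    ((1.00, 0.30), (0.40, 0.15)),
    "keyboard": ((0.75, 0.06), (0.05, 0.02)),
}

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def class_distribution(features):
    """P(class | features) under a naive (independent-feature) Gaussian
    model with a uniform prior over classes."""
    likelihoods = {
        cls: math.prod(gaussian_pdf(x, m, s)
                       for x, m, s in zip(features, mu, sigma))
        for cls, (mu, sigma) in CLASS_MODELS.items()
    }
    total = sum(likelihoods.values())
    return {cls: l / total for cls, l in likelihoods.items()}

# A large plane whose centre of mass sits at table height:
dist = class_distribution((0.74, 1.1))
```

In this toy example, the large plane at 0.74 m height is assigned to the "table" class with high probability, since its area is far outside the invented "shelf" and "keyboard" models.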
3.1 Point Cloud Pre-processing
The object recognition approach starts with the seg-
mentation of planes from the raw 3D point cloud data.
The data is acquired with a tilting LIDAR (Light Detection
and Ranging) laser system. For the segmentation, an
optimised region-growing algorithm is used. This algorithm
is based on the approach in (Vaskevicius et al., 2007) and
was already introduced in (Eich et al., 2010). We extended
this algorithm to deal with unorganised point clouds, such
as the data produced by our tilting system. In such point
cloud data, the points are not available in memory as a
grid, and their nearest neighbours cannot be accessed
directly. Because of this, the nearest-neighbour search
increases the complexity of the algorithm. We therefore
applied several optimisations that make the algorithm much
faster than the original one (Vaskevicius et al., 2007).
We do not describe the algorithm in detail here, because
it is already covered in our previous work (Eich et al.,
2010). The algorithm segments the input 3D point cloud
into planes, which are used in the subsequent processing
steps. The region growing requires several input
parameters, whose values determine the result of the
segmentation. These parameters are: the maximum distance
between two points, the maximum distance between a point
and the plane, the maximum mean squared error of the
plane, the minimum number of points a plane must contain,
and the maximum number of nearest neighbours of a single
point. The algorithm terminates when all points have been
examined, and it yields a set of extracted planes. Fig. 1
shows the result of the segmentation after applying the
listed parameters.
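The region-growing step could be sketched roughly as follows. This is a simplified illustration, not the optimised algorithm of (Vaskevicius et al., 2007): here the plane is fitted once per seed neighbourhood rather than refined incrementally, and the parameter names are chosen for readability only.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(points):
    """Least-squares plane fit; returns (centroid, unit normal, mean squared error)."""
    centroid = points.mean(axis=0)
    # The singular vector of the smallest singular value of the centred
    # points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dists = (points - centroid) @ normal
    return centroid, normal, float(np.mean(dists ** 2))

def grow_planes(cloud, max_point_dist=0.1, max_plane_dist=0.02,
                max_mse=1e-4, min_points=50, max_neighbours=10):
    """Segment an unorganised N x 3 point cloud into planar regions."""
    tree = cKDTree(cloud)               # k-d tree replaces the missing grid structure
    unvisited = set(range(len(cloud)))
    planes = []
    while unvisited:
        seed = unvisited.pop()
        # Fit an initial plane to the seed's neighbourhood; skip non-planar seeds.
        _, nbrs = tree.query(cloud[seed], k=max_neighbours)
        centroid, normal, mse = fit_plane(cloud[nbrs])
        if mse > max_mse:
            continue
        region, frontier = [seed], [seed]
        while frontier:
            dists, nbrs = tree.query(cloud[frontier.pop()], k=max_neighbours)
            for dist, n in zip(dists, nbrs):
                if n not in unvisited or dist > max_point_dist:
                    continue                 # already used, or too far from the region
                if abs((cloud[n] - centroid) @ normal) > max_plane_dist:
                    continue                 # too far from the fitted plane
                unvisited.discard(n)
                region.append(n)
                frontier.append(n)
        if len(region) >= min_points:
            planes.append(np.asarray(region))
    return planes

# Demo: a flat 10 x 10 grid of points on z = 0 segments into a single plane.
xs, ys = np.meshgrid(np.arange(10) * 0.05, np.arange(10) * 0.05)
demo_cloud = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(100)])
demo_planes = grow_planes(demo_cloud)
```

Fixing the plane once per seed keeps the sketch simple; the actual implementation refines the plane estimate as the region grows and avoids redundant neighbour queries, which is where the speed-up over (Vaskevicius et al., 2007) comes from.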
3.2 Extraction of Spatial Object Attributes
The extraction of spatial features of the objects starts
once the planes are segmented. In this step, the spatial
features of each plane, like the size of the plane A ∈ ℝ,
the rectangularity of the plane R ∈ [0, 1], its length and
width E ∈ ℝ², its orientation O ∈ ℍ, and its centre of
mass P ∈ ℝ³, are extracted and stored in a so-called
feature vector Φ = (A, E, R, O, P). For better
identification of these features, the found regions are
first projected into 2D space. This is done by applying
the reverse rotation for pitch and yaw to the original
translated plane, so that, in the end, the normal vector
of the plane is parallel to the global z-axis. We assume
that, through our region-growing algorithm, each segmented
object approximates a planar surface. Since we have
already rotated and translated the surface into the
xy-plane, we can simply set all remaining z-values to
zero. By doing this, we project the approximately planar
surface onto an exactly planar one. Afterwards, the hulls
of the regions are computed. For this, an alpha-shape
method from the computational geometry algorithm library
CGAL
GRAPP 2014 - International Conference on Computer Graphics Theory and Applications