one another, we introduce a further instance that can
be inserted into the fact base to represent the orienta-
tion of the robot or of any other object. For example,
if the robot faces north, its direction is represented by
(Orientation N Robot), where Orientation is the arti-
ficially added instance in the fact base. This makes it
possible to fulfill not only the first but also the second
subtask: if the robot's orientation is the inverse of the
orientation of the target object, one can assume that
the orientation needed for the interaction of the robot
with the target object has been achieved.
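As a rough illustration, the inverse-orientation check of the second subtask could be sketched as follows; the function and dictionary names are our own and not taken from the system:

```python
# Hypothetical sketch: checking whether the robot's cardinal orientation is
# the inverse of the target object's orientation, as required for interaction.
INVERSE = {"N": "S", "S": "N", "E": "W", "W": "E",
           "NE": "SW", "SW": "NE", "NW": "SE", "SE": "NW"}

def interaction_orientation_reached(robot_dir, target_dir):
    """True if the robot faces the inverse of the target's orientation."""
    return INVERSE[robot_dir] == target_dir

# Robot faces north, target object faces south: they face each other.
print(interaction_orientation_reached("N", "S"))
```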
To fulfill the third subtask, computing the distance
between the robot and the target object, quantitative
spatial relations are needed, which neither RCC-8 nor
CDC provides. However, before these calculi can be
used, a mapping from quantitatively acquired posi-
tions to qualitative relations must always be done be-
forehand; otherwise, some of the RCC-8 relations
cannot be applied between objects. For example,
if two objects are near one another, a computation
step is needed that asserts either the relation EC or
DC between these objects, depending on the actual
quantitative distance. This implies that the spatial
configuration of real-world objects with RCC-8 rela-
tions inherently contains distance information. If re-
gions touch (EC), overlap (PO), or are contained in
the region of the robot (NTPP, TPP), the robot and a
target object are typically near enough to interact.
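The mapping step described above could be sketched as follows; the threshold parameter and function name are illustrative assumptions of ours, not part of the paper's system:

```python
# Illustrative sketch: mapping a quantitatively measured distance between two
# object boundaries to a qualitative RCC-8 relation. The threshold touch_eps
# is a hypothetical parameter chosen for the example.
def quantitative_to_qualitative(distance, touch_eps=0.01):
    """Return EC if the region boundaries (nearly) touch, DC otherwise."""
    return "EC" if distance <= touch_eps else "DC"

print(quantitative_to_qualitative(0.0))  # regions touch -> EC
print(quantitative_to_qualitative(1.5))  # regions apart -> DC
```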
To consider obstacles as well, the previously
computed orientation and the inherent distance infor-
mation (CDC and RCC-8 relations) can be used. From
a topological point of view, an object can only lie be-
tween two other objects if its region is connected with
the other two regions. Thus, using RCC-8, one con-
siders the robot's region, the target object's region,
and the region of a potential obstacle O. If O's region
is related to the other two regions by PO, NTPP, or
TPP, one can assume that O really is an obstacle.
With CDC, an object O might be an obstacle if, seen
from the target object, it lies in the same direction as
the robot and, seen from the robot, it lies in the same
direction as the target object. For example, if the
robot is east of the target object (Robot E Target)
(and therefore (Target W Robot) also holds), then an
object that is also east of the target and west of the
robot might be in between and thus might be an ob-
stacle. However, such an inference might not be cor-
rect, because the calculi consider only two dimen-
sions, or the robot might find a plan for grasping the
target even though an object is in between. In sum-
mary, it is possible to evaluate the interaction ability
with qualitative spatial relations, although the result
may be uncertain.
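The CDC-based between-ness test from the example above could be sketched like this; the function and variable names are our own illustrative choices:

```python
# Sketch of the CDC obstacle heuristic: O might be an obstacle if, seen from
# the target, O lies in the same direction as the robot, and, seen from the
# robot, O lies in the direction of the target. Names are hypothetical.
INVERSE = {"N": "S", "S": "N", "E": "W", "W": "E"}

def might_be_obstacle(dir_robot_from_target, dir_obj_from_target,
                      dir_obj_from_robot):
    """Heuristic two-dimensional between-ness test based on CDC directions."""
    dir_target_from_robot = INVERSE[dir_robot_from_target]
    return (dir_obj_from_target == dir_robot_from_target and
            dir_obj_from_robot == dir_target_from_robot)

# Robot E Target; an object east of the target and west of the robot
# might lie in between.
print(might_be_obstacle("E", "E", "W"))
```

As the text notes, this remains a heuristic: it ignores the third dimension, so a positive answer does not guarantee that the object actually blocks the robot.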
5 EVALUATION OF THE
APPROACH
For the evaluation of our conceptual approach, an
architecture was implemented that combines the
needed inference technologies. The Event Transac-
tion Logic Inference System (ETALIS) (Anicic and
Fodor, 2011) serves as the underlying system; it im-
plements a complex event processing (CEP) frame-
work on the basis of Prolog. Event processing en-
ables the handling of continuous data streams, which
in our case are created by the sensors of a robot. The
use of Prolog allows not only the generation of com-
plex events from simple events (as is possible with
traditional CEP systems) but also strong, logically
grounded conclusions and inferences about the events,
their context, or other formulated predicates. In com-
parison to other CEP systems, which are implemented
in procedural or object-oriented languages, ETALIS
is more flexible and in part achieves better perfor-
mance results (Anicic et al., 2010). Starting with
ETALIS, we combine it with the Prolog-OWL inter-
face THEA2 (Vassiliadis and Mungall, 2012) and DL
reasoners (such as Pellet (Clark and Parsia, LLC,
2011), RacerPro (Racer Systems GmbH & Co. KG,
2011), or HermiT (Motik et al., 2011)) into our sys-
tem, called ETALIS-Spatial.
The knowledge representation of our system is re-
alized with an OWL 2 knowledge base. Objects and
spatial relations are defined as described in Section 3.
THEA2 provides access to the ABox for extracting
and inserting spatial relations and the instances of all
participating objects.
Processing with ETALIS-Spatial starts from sen-
sor data, which is assumed to have already been
mapped from quantitative to qualitative values. The
input consists of identified objects and their direct
spatial relations. In principle, this preprocessing
could also be done by the Prolog engine. Typical ex-
amples of input data (primitive events) are assert-
Object(Plate1) for a recognized object, assertRela-
tion(Plate1, Plate2, DC) for establishing a spatial re-
lation, and robotMoved for a completed movement of
the robot. A robotMoved event triggers a recompu-
tation of the spatial relations between the robot and
the other objects. Such input is continuously streamed
into the system. After a batch of new data has been
asserted, the system runs a consistency test of the
knowledge base and, if possible, infers new relations
(marked, e.g., as foundRelation(Plate2, Cup4, EC)).
Complex events represent the output of this computa-
tion, e.g. interactable(Plate2), indicating that the
robot can now interact with Plate2.
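A minimal sketch of this event-driven processing, written in Python rather than as ETALIS Prolog rules and with all names illustrative, might look like this:

```python
# Toy sketch of the ETALIS-Spatial event flow: primitive events stream in,
# facts are asserted, and complex events such as interactable(...) are
# derived. This is not the paper's Prolog implementation.
facts = []

def handle(event):
    """Consume one primitive event tuple; return derived complex events."""
    name, *args = event
    if name == "assertObject":
        facts.append(("object", args[0]))
        return []
    if name == "assertRelation":
        a, b, rel = args
        facts.append((a, b, rel))
        # A touching/overlapping relation involving the robot makes the
        # other object interactable (cf. EC, PO, TPP, NTPP above).
        if rel in ("EC", "PO", "TPP", "NTPP") and "Robot" in (a, b):
            other = b if a == "Robot" else a
            return [("interactable", other)]
        return []
    if name == "robotMoved":
        # In the real system this triggers recomputing robot-object
        # relations; here we simply drop stale robot facts.
        facts[:] = [f for f in facts if "Robot" not in f]
        return []
    return []

stream = [("assertObject", "Plate1"),
          ("assertRelation", "Plate1", "Plate2", "DC"),
          ("assertRelation", "Robot", "Plate2", "EC")]
out = [c for e in stream for c in handle(e)]
print(out)  # [('interactable', 'Plate2')]
```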