4.3 Running the Evaluations
In this paper, we aim to evaluate conformance and interoperability using RDF(S), OWL Lite and OWL DL as interchange languages. To this end, we use three different test suites that contain synthetic ontologies covering simple combinations of knowledge model components from these languages.
The RDF(S) and OWL Lite Import Test Suites are described in [7] and the OWL DL Import Test Suite is described in [8]. These test suites have been defined in a similar, manual way; the main difference between them is that the OWL DL test suite was generated following a keyword-driven process, which yields a more exhaustive test suite (561 tests, compared with the 82 tests of each of the other test suites).
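As an illustration of the idea behind such a keyword-driven process (a sketch only: the keywords and the combination depth below are hypothetical examples, not the ones used to build the actual suite), test ontologies can be enumerated by combining keywords that each denote one knowledge model component:

```python
from itertools import combinations

# Hypothetical keywords, each denoting one knowledge model component.
KEYWORDS = ["Class", "subClassOf", "ObjectProperty",
            "someValuesFrom", "intersectionOf"]

def generate_tests(max_size=2):
    """Enumerate test identifiers as combinations of component keywords."""
    for size in range(1, max_size + 1):
        for combo in combinations(KEYWORDS, size):
            yield "-".join(combo)  # e.g. "Class-someValuesFrom"

print(sum(1 for _ in generate_tests()))  # 15 tests for these 5 keywords
```

Exhaustively enumerating such combinations is what makes a keyword-driven suite larger, and more systematic, than a manually defined one.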
The evaluations described above are part of the evaluation services provided by the SEALS Platform⁴, a research infrastructure that offers computational and data resources for the evaluation of semantic technologies; the test suites mentioned above are also included in that platform. Once a tool is connected to the SEALS Platform, the platform can automatically execute the conformance and interoperability evaluations. We connected six well-known tools to the platform and, by means of the SEALS Platform, executed the required conformance evaluations (every tool with every test suite) and interoperability evaluations (every tool with all the other tools, using every test suite).
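The shape of these two evaluation campaigns can be summarised as follows (a sketch only: the tool and suite names come from this section, but `run_conformance` and `run_interoperability` are hypothetical placeholders, not the SEALS Platform API):

```python
from itertools import product

TOOLS = ["Jena", "OWL API", "Sesame",
         "NeOn Toolkit", "Protege OWL", "Protege 4"]
SUITES = ["RDF(S) Import", "OWL Lite Import", "OWL DL Import"]

def run_conformance(tool, suite):
    print(f"conformance: {tool} on {suite}")  # placeholder

def run_interoperability(origin, destination, suite):
    # The origin tool exports the ontology; the destination tool imports it.
    print(f"interoperability: {origin} -> {destination} on {suite}")  # placeholder

# Conformance: every tool with every test suite.
for tool, suite in product(TOOLS, SUITES):
    run_conformance(tool, suite)

# Interoperability: every ordered pair of distinct tools, with every suite.
for origin, destination, suite in product(TOOLS, TOOLS, SUITES):
    if origin != destination:
        run_interoperability(origin, destination, suite)
```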
The six tools evaluated were three ontology management frameworks: Jena (version 2.6.3), the OWL API (version 3.1.0 1592), and Sesame (version 2.3.1); and three ontology editors: the NeOn Toolkit (version 2.3.2, using the OWL API version 3.0.0 1310), Protégé OWL (version 3.4.4 build 579), and Protégé version 4 (version 4.1 beta 209, using the OWL API version 3.1.0 1602). As can be seen, some tools use ontology management frameworks for processing ontologies.
5 Conformance Results
This section presents the conformance results for the six tools evaluated. Table 1 presents the tool conformance results for RDF(S), OWL Lite and OWL DL⁵. The table shows the number of tests in each of the categories in which the result of a test can be classified: the original and the resultant ontologies are the same (SAME), they are different (DIFF), or the tool execution fails (FAIL).
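For illustration, a minimal sketch of this classification, using rdflib for a blank-node-aware graph comparison (`tool_import_export` is a hypothetical stand-in for making the tool under test import and re-export an ontology; it is not part of the SEALS Platform API):

```python
from rdflib import Graph
from rdflib.compare import isomorphic

def classify(original_path, tool_import_export):
    """Classify one conformance test as SAME, DIFF, or FAIL."""
    try:
        # Hypothetical callable: the tool imports the ontology and
        # exports it again, returning the path of the resulting file.
        result_path = tool_import_export(original_path)
        original = Graph().parse(original_path)
        result = Graph().parse(result_path)
    except Exception:
        return "FAIL"  # the tool (or the parsing of its output) failed
    # SAME iff the two graphs are isomorphic (blank nodes aside).
    return "SAME" if isomorphic(original, result) else "DIFF"
```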
As can be observed in these results, Jena and Sesame present no problems when processing the ontologies included in the test suites for the different languages. Therefore, no further comments will be made on these tools.
In addition, as previously mentioned, the NeOn Toolkit and Protégé 4 use the OWL API for ontology management.
The version of Protégé 4 evaluated uses a version of the OWL API that is almost contemporary to the one we evaluated. Hence, after analysing the results of Protégé 4 we
⁴ http://www.seals-project.eu/seals-platform
⁵ The tool names have been abbreviated in the tables: JE=Jena, NT=NeOn Toolkit, OA=OWL API, P4=Protégé 4, PO=Protégé OWL, and SE=Sesame.
⁶ Not counting additions of owl:Ontology.
⁷ Not counting additions of owl:NamedIndividual.