4.3 Limitations
Generally, the number of evaluated patterns and the
number of participants should be increased in order
to strengthen the statistical significance of the results.
Additionally, an outlier had to be removed from
the results. The ratings of this participant differed
substantially from all other ratings. We assume that
the participant either misinterpreted the rating scale
or lacked motivation, which resulted in less accurate
ratings. However, this outlier was in the control group,
so our interpretation of the metric correlations was
not affected.
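A screening step of this kind can be sketched as follows; the function, the participant names, and the threshold are illustrative assumptions only, not the procedure actually used in the study:

```python
# Hypothetical sketch: flag participants whose ratings deviate strongly
# from the group consensus. Names and threshold are illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

def flag_outliers(ratings_by_participant, z_threshold=2.0):
    """Return participants whose mean absolute deviation from the
    per-pattern group mean is unusually large (z-score heuristic)."""
    patterns = range(len(next(iter(ratings_by_participant.values()))))
    # Group mean rating for each evaluated pattern.
    group_means = [mean([r[p] for r in ratings_by_participant.values()])
                   for p in patterns]
    # Each participant's average distance from the group means.
    deviations = {
        name: mean([abs(r[p] - group_means[p]) for p in patterns])
        for name, r in ratings_by_participant.items()
    }
    dev_values = list(deviations.values())
    mu = mean(dev_values)
    sigma = mean([(d - mu) ** 2 for d in dev_values]) ** 0.5
    if sigma == 0:
        return []
    return [name for name, d in deviations.items()
            if (d - mu) / sigma > z_threshold]
```

For example, a participant who rates every pattern far below an otherwise consistent group would be the only name returned.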
The selection of patterns for the survey may also
be viewed critically. We selected the patterns based
on our assumption about which of them would be
accessible and understandable to participants who are
not domain experts. Additionally, the sequence of
patterns in the evaluation process was not randomized.
However, there was no evidence of a learning effect
during the evaluation of the patterns. The componency
pattern, for example, was evaluated in both groups:
first in the test group and second in the control group.
It received better evaluations in the test group, whereas
a learning effect would intuitively lead one to expect
the opposite.
5 CONCLUSIONS AND FURTHER
WORK
The goal of this work was to investigate the possi-
bility of applying ontology quality metrics to Content
ODPs and to validate such metrics. As a result, Table 3
shows the metrics that can be calculated for Content
ODPs and that correlate significantly with engineering
principles. Additionally, we found some ambiguities
in metric calculation procedures that need to be re-
solved in order to make metric-based quality state-
ments comparable.
For future work, the points listed in section 4.3
need to be addressed. Furthermore, it seems worth-
while to investigate the correlations between metrics
and user ratings in more detail. The validation of
additional metrics may also be worthwhile. Tool sup-
port for the selected metrics seems desirable for both
practice and further research.
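As an illustration of the kind of analysis meant here, a rank correlation between per-pattern metric values and mean user ratings could be computed as follows. This is a minimal sketch of Spearman's rho; the paper does not prescribe this implementation, and the function names are ours:

```python
# Illustrative sketch (not from the paper): Spearman rank correlation
# between metric values computed per pattern and mean user ratings.

def ranks(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Rank-based correlation is a natural fit here because Likert-style user ratings are ordinal rather than interval-scaled.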