Authors: Mounira Harzallah 1; Giuseppe Berio 2; Toader Gherasim 1 and Pascale Kuntz 1
Affiliations: 1 University of Nantes, France; 2 University of Bretagne Sud, France
Keyword(s): Ontology, Ontology Quality Problems, Quality Problems Detection, Ontology Quality Evaluation, Automatically Generated Ontology.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Data Engineering; Enterprise Information Systems; Information Systems Analysis and Specification; Knowledge Acquisition; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Ontologies and the Semantic Web; Ontology Engineering; Symbolic Systems
Abstract:
Ontologies play a major role in the development of personalized and interoperable applications. However, the validation of ontologies remains a critical open issue. Validation is fundamentally driven by "ontology evaluation", often referred to as "quality evaluation", as explained in the Introduction. This paper reports on an experiment built on our previous work on quality evaluation, using ontologies automatically generated from textual resources. In that previous work, we proposed a standard typology of problems that negatively impact the quality of an ontology (named quality problems). The experiment shows how our previous work can be deployed in practice. An a posteriori analysis of the experimental results, together with the lessons learnt presented in the paper, makes the key contributions to validation explicit and concrete. Finally, the conclusions highlight both the limitations of the experiment and research perspectives.