A Proposal for an Ontology Metrics Selection Process
Achim Reiz (https://orcid.org/0000-0003-1446-9670) and Kurt Sandkuhl (https://orcid.org/0000-0002-7431-8412)
Rostock University, 18051 Rostock, Germany
Keywords: Ontology Metrics, NEOntometrics, Ontology Quality, OntoMetrics, Knowledge Engineering.
Abstract: Ontologies are the glue for the semantic web, knowledge graphs, and rule-based intelligence in general. They build on description logic, and their development is a non-trivial task. The underlying complexity emphasizes the need for quality control, and one way to measure ontologies is through ontology metrics. For a long time, the calculation of ontology metrics was merely a theoretical proposal: while there was no shortage of proposed ontology metrics, actual applications were mostly missing. That changed with the creation of NEOntometrics, a tool that implements the majority of the ontology metrics proposed in the literature. While it is now possible to calculate large numbers of ontology metrics, this also revealed that the calculation alone does not (yet) make the metrics useful. NEOntometrics alone offers over 160 ontology metrics; a careful selection for the given use case is crucial. This position paper argues for a selection process for ontology metrics. It first presents core questions for identifying the underlying ontology requirements and then guides users in identifying the correct attributes and their associated measures.
1 INTRODUCTION
Ontologies are central to sharing meaning between different human and computational actors. They are at the foundation of the semantic web and knowledge graphs and enable the alignment of different terminologies, the encoding of business rules, and the formal description of a domain to the computer. They have the potential to break down data silos and make implicit knowledge explicit through inference machines. Developing ontologies, however, is not a trivial task: The World Wide Web Consortium (W3C) standardized the web ontology language OWL, which builds on description logic. It provides sophisticated features to formalize classes and relations, and there is rarely one right way to model a domain; there are many ways to skin a cat.
The complexity and high degree of freedom put quality control activities at the forefront. Automatically calculated metrics offer an objective and quick view of ontologies. They provide an abstraction of the inner fabric of an ontology, which can reveal potential irregularities and track the overall development progress (Vrandečić & Sure, 2007).
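To illustrate what such an abstraction looks like, here is a minimal sketch in Python using rdflib; the file name and the small metric set are illustrative assumptions, not the NEOntometrics implementation.

```python
# A minimal sketch of the structural abstraction ontology metrics provide.
# The metric selection below is illustrative, not an official metric set.
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("pizza.owl")  # hypothetical local ontology file

classes = set(g.subjects(RDF.type, OWL.Class))
object_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
subclass_axioms = list(g.triples((None, RDFS.subClassOf, None)))

print("classes:", len(classes))
print("object properties:", len(object_props))
print("subclass axioms:", len(subclass_axioms))
```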
While the usefulness of ontology metrics is undisputed, the past years were marked by an implementation gap: Many ontology metrics and frameworks were proposed, e.g., by (Gangemi et al., 2006), (Tartir et al., 2005), or (Burton-Jones et al., 2005), but for a long time, there was only minimal tool support for bringing these frameworks into use. That changed with the introduction of OntoMetrics (Lantow, 2016) and its successor, NEOntometrics (Reiz & Sandkuhl, 2022b), which calculates most of the proposed ontology metrics. With these developments, ontology metrics found applications in corporations (Reiz et al., 2020) and research projects (Blagec et al., 2022; Rocha et al., 2020).
With the necessary tools available, a new challenge arose: the number of metrics is overwhelming for ontology developers, especially inexperienced ones. OntoMetrics calculates 81 ontology measures and NEOntometrics over 160. Knowledge engineers need a small subset of key performance indicators (KPIs) that quickly conveys the main development aspects of a given ontology. At the same time, ontologies are too heterogeneous to derive standard rules for their development, and there is no set of metrics that is helpful for everybody (Reiz & Sandkuhl, 2022a).
This position paper argues that metric selection is
crucial for bringing ontology metrics into use. One
can develop document types like taxonomies,
glossaries, or data models that are all legitimately
called ontologies and conform to the OWL standard.
Their application scenarios, users, and underlying
strategy are likely vastly different, and so is the
required evaluation. This position paper presents a
process for first answering core questions to identify
the ontology requirements and then selecting the right
metrics.
This contribution is structured as follows: Section 2 describes the challenges in ontology evaluation and metric selection, Section 3 proposes a methodology for metric selection, followed by a conclusion.
2 SELECTING METRICS:
A NON-TRIVIAL TASK
This section first presents the variety of document types and how they can be modeled with ontology languages. Afterward, it argues for ontology evaluation using ontology metrics and recapitulates existing metric frameworks. Finally, the heterogeneity of ontology development processes motivates the creation of the metric selection process.
2.1 One Ontology, Many Possible
Document Types
There is no scientific dispute about the definition of computational ontologies: an ontology is an "explicit specification of a conceptualization", according to the highly cited paper by Gruber (1993). The standardization of these artifacts is also settled with the recommendations for the Web Ontology Language (OWL) and RDF Schema (RDFS) by the World Wide Web Consortium (W3C) (https://www.w3.org/TR/owl2-overview/, https://www.w3.org/TR/rdf-schema/).
These technologies can cover many different application scenarios. Figure 1 categorizes document types and technologies along a formality scale and according to document categories. OWL and RDFS allow knowledge engineers to develop highly sophisticated, interconnected graphs that maximize the inference of hidden facts. However, it is also possible to build only a rudimentary glossary: a loose collection of words with human-centered annotations that carry no further logical meaning. Both can adhere fully to the standard and can be measured using ontology metrics. As the purposes of these ontologies probably differ widely, it makes little sense to qualify either as better or worse. An ontology that is meant to be a taxonomy has a different goal than one that is meant to be a data model. The two should not be treated in the same way, and a set of metrics that works for the first ontology will likely not work for the second.
Figure 1: The various document types and their category. Figure adapted and extended from (Uschold & Gruninger, 2004).
Table 1: Description and formal implications of the document types of Figure 1.

Document Type | Definition/Description | Cumulative Attributes
Terms | Concepts and relationships. | None.
"Ordinary" Glossary | List of words relating to a specific subject, text, or dialect, with explanations; a brief dictionary. | Further information (e.g., labels).
Ad-Hoc Hierarchy | Extensive, deep hierarchy with links to further resources (Labrou & Finin, 1999). | Adds hierarchy and links.
Data Dictionaries (EDI) | An inventory that specifies the source, location, ownership, usage, and destination of all of the data elements that are stored in a database (Institute for Telecommunication Sciences [ITS], 2001). | Adds meta-information, source, and destination of data elements.
Thesaurus | List of related word groups, organized by a combination of attributes; entries include synonyms (Cross & Pal, 2005, p. 449). | Equivalence relations.
Structured Glossary / Directory | Access one piece of information using another. Taxonomic relationships might exist (e.g., is-a). Even though relations exist, the structure is relatively flat. | More expressive relations between entities; basic structure (though not formalized).
XML DTDs | XML Document Type Declaration; defines hierarchical structures including identifiers, attributes, and entities (Hitzler et al., 2008). | Allows strict structure definition with basic cardinalities.
Informal Hierarchy (Folksonomy) | Users tag information onto items; an interlinked hierarchy can be created using statistical evaluation. | Interlinked hierarchy.
DB Schema | Formalizes non-typed relations between structured data (Curtis & Cobham, 2008). | Definition of formal relations and data types.
XML Schema | Like DTDs, with exact cardinality and data type support and unique ID keys; reusing schemas is possible through imports (Fallside & Walmsley, 2004). | Exact cardinalities & data types.
Data Models (e.g., UML, STEP) | A conceptual data model containing typed relations between objects (Curtis & Cobham, 2008). | Typed relations.
Formal Taxonomies | Machine-readable structure with interlinked objects. | Machine-readable links.
Propositional Calculus | Formal algebra; declaration of facts (Lifschitz et al., 2008). | Decidable: a computer can infer whether statements are valid.
Description Logic | Formal algebra; can be viewed as a decidable subset of first-order logic (Baader et al., 2008; Bruijn & Heymans, 2008). | Decidable; enables automatic inference algorithms.
First-Order Logic | Formally grounded logic algebra with quantifiers and relations (Bruijn & Heymans, 2008; Lifschitz et al., 2008). | Non-decidable.
2.2 Ontology Evaluation Methods and
Why to Choose Metrics
(Tankeleviǧiene & Damaševičius, 2009) collected three definitions of quality in ontologies, namely "conformance to requirements", "fitness to use", and "the totality of features and characteristics of a software product that bear on its ability to satisfy stated or implied needs". In essence, an ontology fulfills a function, and its quality reflects how well it can fulfill this function.
In their state-of-the-art survey, (Raad & Cruz, 2015) collected the various ontology evaluation methods, namely gold-standard-, corpus-, task-, and criteria-based. Gold-standard-based approaches are best suited to evaluate ontology mapping or learning. They compare a created ontology to a reference considered "perfect". Corpus-based evaluations assess the coverage of a given domain by comparing a learned ontology with the content of a text corpus, while task-based assessments measure an ontology's ability to fulfill a given task, regardless of its structural characteristics. Criteria-based approaches assess the construction of the ontology based on structural or complex meta-logical attributes. While the former can be evaluated automatically, the latter mainly relies on expert evaluation.
The first two evaluation methods are a good fit for ontology learning but challenging to apply beyond it: typically, an ontology is not modeled according to a text corpus, and a gold standard does not exist. Task-based approaches require an application context against which the performance of an ontology is evaluated. Thus, the evaluation methodologies need to be highly customized and are difficult to scale to a broader audience. Criteria-based approaches build on the inherent structure that every ontology has. Complex approaches that build on meta-logical consistency require the intervention of skilled knowledge engineers, while approaches that measure structural attributes can be calculated automatically for every ontology, regardless of the usage context. That makes them highly scalable and easy to implement. However, their influence on the individual notion of quality is not as easy to assess as for the other methodologies, as the attributes are rather abstract and, e.g., do not consider the fitness to fulfill a task. The selection of the proper measures is cumbersome, and the interpretation guidelines of the metric frameworks, if there are any, are highly generalized and possibly not applicable to the use case at hand. Without interpretation, the measures mostly remain arbitrary to the metric consumer.
2.3 The Questionable Promises of Too-Easy Solutions
The level of detail in the descriptions of the various ontology metric frameworks differs significantly. oQual by (Gangemi et al., 2005) provides only minimal textual guidance. More detailed descriptions and advice for metric interpretation are offered in OntoQA by (Tartir et al., 2005).
The most holistic approach is OQuaRE by (Duque-Ramos et al., 2011). The framework defines 18 metrics and links them to quality characteristics like usability or adaptability. It firmly guides the metric interpretation by assigning predefined scores from 1 (worst) to 5 (best), depending on the value range a measurement falls into. However, research has shown that these quality scores fail to capture the reality of modeled ontologies: A study of 4094 ontologies (Reiz & Sandkuhl, 2023) identified that many measures are heavily tilted toward the best or worst grades. For seven of the 18 metrics, more than 80% of the ontologies are at these extreme values. Only five of the measures are somewhat evenly distributed, and none of the metrics shows a Gaussian curve, even though quality typically follows such a pattern, with only a few artifacts being best or worst and most being in the middle.
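To make the criticized scoring mechanism concrete, the following is a minimal sketch of OQuaRE-style score binning; the boundary values are hypothetical, not those defined by OQuaRE.

```python
# A sketch of OQuaRE-style score binning: a raw metric value is mapped
# to a grade from 1 (worst) to 5 (best) via predefined value ranges.
# The boundaries below are hypothetical, not the ones OQuaRE defines.
def oquare_style_score(value: float, boundaries: list[float]) -> int:
    """Map a metric value to a 1-5 score; boundaries are the upper
    limits of scores 1-4 (anything above the last boundary scores 5)."""
    for score, upper in enumerate(boundaries, start=1):
        if value <= upper:
            return score
    return 5

# E.g., a hypothetical depth metric graded with boundaries 2, 4, 6, 8:
print(oquare_style_score(3.0, [2, 4, 6, 8]))  # -> 2
```

If most real-world values fall below the first or above the last boundary, nearly all ontologies land at grade 1 or 5, which is exactly the skew the cited study observed.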
Further research collected empirical evidence for the highly heterogeneous development processes of ontologies. In a study by (Reiz & Sandkuhl, 2022a) on the evolutionary processes of 69 dormant ontologies, the authors found no evidence that these artifacts share a standard development process. On the contrary, common assumptions could not be supported, e.g., that ontologies get larger and more complex with increasing maturity.
Regarding the selection of ontology metrics, this supports the notion that ontologies are too heterogeneous for a simple set of measures to capture the individual requirements of all or even most knowledge engineers. Only one framework has claimed this achievement, and with questionable validity. While extensive descriptions of proposed metrics, like those in OntoQA, are still helpful, there is no silver bullet for ontology evaluation, and metrics must be carefully selected and interpreted.
3 SELECTING METRICS FOR
QUALITY CONTROL
Ontology metrics measure structural attributes. Quality, however, is highly dependent on one's individual requirements. Before a knowledge engineer can use ontology metrics to assess something akin to quality, it is necessary to match these requirements with the attributes that fulfill them and the metrics that measure these attributes. A set of requirements alone, e.g., in the form of competency questions, does not reveal much about the used or required formalizations, as there are almost countless options for creating a model.
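One way to make this matching explicit is a simple two-stage mapping from requirements to attributes to metrics. The following sketch uses illustrative requirement, attribute, and metric names; they are assumptions, not a normative catalog.

```python
# A sketch of the requirement -> attribute -> metric matching described
# above. All entries are illustrative; concrete metric names would come
# from a metric framework or a catalog such as NEOntometrics.
requirement_to_attributes = {
    "every class is human-readable": ["annotations on classes"],
    "terminology is hierarchically organized": ["subclass axioms"],
}
attribute_to_metrics = {
    "annotations on classes": ["annotation count",
                               "annotations-to-classes ratio"],
    "subclass axioms": ["subclass axiom count",
                        "maximum hierarchy depth"],
}

for req, attrs in requirement_to_attributes.items():
    metrics = [m for a in attrs for m in attribute_to_metrics[a]]
    print(f"{req}: track {metrics}")
```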
If an ontology is created from scratch and the evaluation is considered from the start, a top-down approach can work: The types of used attributes and their formalizations are determined prior to the development of the ontology. More likely, though, the artifact reuses existing ontologies and already has a development history. In these cases, a bottom-up evaluation is better suited: Analyzing the existing ontology and its structure, one tries to derive the attributes that best capture its development.
Questioning the goal of the ontology and of the evaluation can aid both the top-down and the bottom-up approach. Some core questions are depicted in Table 2, although, given individual circumstances, this list might be non-exhaustive. With these
considerations, one can identify and map the required
attributes to corresponding ontology metrics.
3.1 Initiating a Top-down
Metric-Driven Ontology
Development
The long-term strategy is the first thing to consider at the beginning of a new ontology development process. Is there just one imminent use, even in the long term? Or should other goals, like the alignment with other ontologies or applications, be considered? If so, these design decisions should be made at the very beginning, e.g., by analyzing the future requirements and the ontologies that need to be integrated. If strategic goals exist, e.g., for using specific constructs, the corresponding attributes can be selected early on.
Table 2: Core questions for metric selection.

Q | Question | Identify…
1 | What is the long-term strategic goal? | future integrations
2 | What is modeled? | the document category
3 | Which applications use the ontology? | how and where the ontology is used
4 | How is it modeled? | the used formalizations
5 | Who is the consumer? | the required information for the stakeholders
The second question regards the document category and type the ontology should have. Here, Table 1 supports categorizing the possible varieties. The attributes used in an ontology depend on the kind of document modeled (cf. Figure 2).
Q3 considers the application landscape. The
applications likely have requirements regarding the
functionality, data structures, and interfaces that the
ontology needs to provide. These constructs can be
mapped to the underlying attributes to track whether
the required constructs are being built.
Now, more concrete planning can begin, and the question arises of how to model the actual ontology. The first three questions identified many requirements and attributes that can serve the given purposes. However, there are many ways to instantiate the model, and this step will probably discard some of the previous considerations and attributes. At the end of working on question 4, there should be a list of the attributes that are used and are likely to be measured. Table 1 and Figure 2 can guide the search for the right formalizations.
Finally, the attributes are selected and tailored to the actual metric consumers; in this step, the mapping of attributes to metrics is carried out. One ontology might have more than one stakeholder and will likely require different views. For example, a manager might be more interested in overview measures like the annotations-to-classes ratio. Two knowledge engineers working on different aspects, e.g., the structure and the annotations, need other metrics, e.g., graph-related measures or the counts of annotations and classes. Giving the right metrics to the right persons enables the metrics to aid the specific tasks these persons are working on. More information on this selection process can be found in Section 3.3.
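A sketch of such stakeholder-specific views could look as follows; the roles and metric names are illustrative assumptions.

```python
# A sketch of stakeholder-specific KPI views: the same ontology, but a
# different metric subset per consumer. Roles and metric names are
# illustrative, not a prescribed assignment.
kpi_views = {
    "manager": ["annotations-to-classes ratio"],
    "structure engineer": ["graph depth", "subclass axiom count"],
    "annotation engineer": ["annotation count", "class count"],
}
for role, metrics in kpi_views.items():
    print(f"{role}: {metrics}")
```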
Figure 2: Possible relations between attributes, OWL formalizations, and the document categories.
3.2 A Bottom-up Approach for
Applying Metrics to Existing
Developments
The best-case scenario is early, thorough strategic planning of future ontology developments, including evaluation. More likely, though, is an evaluation scenario for an already existing artifact. The strategic questions presented in Table 2 still need to be answered for selecting and using ontology metrics, but they must now be interpreted in light of the development decisions already made. Figure 3 depicts a proposal for a decision funnel of a bottom-up metric selection process.
At first, an initial look at a large body of metric data reveals the axioms used in the ontology. The goal is to identify the structural attributes implemented in the ontology and those that need not be considered further. Thus, this step answers the how.
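As an illustration of this inventory step, the following sketch counts the occurrences of selected OWL constructs with rdflib; the file name and the chosen construct set are assumptions, not the NEOntometrics implementation.

```python
# A minimal sketch of the first bottom-up step: inventory which OWL
# constructs an existing ontology actually uses, so that unused
# attribute groups can be dropped from further consideration.
from collections import Counter
from rdflib import Graph, RDF, OWL

g = Graph()
g.parse("existing-ontology.owl")  # hypothetical artifact under evaluation

# Count rdf:type assertions for an illustrative set of OWL constructs.
inventory = Counter(
    o for o in g.objects(None, RDF.type)
    if o in {OWL.Class, OWL.ObjectProperty, OWL.DatatypeProperty,
             OWL.TransitiveProperty, OWL.Restriction}
)
for construct, count in inventory.items():
    print(construct, count)
```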
Afterward, the next phase considers the existing and planned technical integrations. It answers which of the given measures are necessary for the corresponding applications and derives possible functional requirements, e.g., that every class needs a relation or that object property characteristics are required.
The usage context now considers the what: Identifying the document category and type from the previous questions and the indicative attributes of Table 1 answers, based on the empirical observations, which kind of ontology has been developed.
Figure 3: Proposal for a bottom-up metric selection process.
The last step reflects the strategic objectives of the ontology. While the driving questions are the same as in the previously described top-down process, the ontology requirements can now be matched against the already modeled reality. In the best case, the modeled ontology already fulfills all requirements. However, the strategic evaluation can also reveal the necessity to restructure the given artifact, e.g., to add more information or delete obsolete elements.
Finally, the actual metric selection takes place. Depending on the outcome of the strategic evaluation, the metrics selected should best capture the progress of future development or, if applicable, the restructurings. An ontology will likely have more than one set of KPIs, depending on its different consumers.
3.3 Selecting and Interpreting the KPIs
Once the core questions are answered, the actual metrics need to be selected. Many frameworks propose various measures, and the lack of an accepted vocabulary for measures leads to heterogeneous definitions: Sometimes, the frameworks propose different names for the same measured elements. The metric ontology by (Reiz & Sandkuhl, 2022c) describes the various frameworks in a joint, formalized terminology. This ontology and the frontend implementation Metric Explorer in NEOntometrics (Reiz & Sandkuhl, 2022b) can guide the identification of relevant measures.
While the metric selection is a mandatory step for using ontology metrics, the selected measures still need to be interpreted. Some of the answered core questions can be translated into value boundaries. For example, if every class needs to have an annotation, the annotations-to-classes ratio should be at least 1. The historical development of the given measures then indicates whether the ontology evolves in favor of the set goals.
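A minimal sketch of such a boundary check, with made-up version data, could look as follows.

```python
# A sketch of translating a core-question answer into a value boundary:
# if every class needs an annotation, the annotations-to-classes ratio
# should be at least 1. The version history below is made-up example data.
history = {  # version -> (annotations, classes), hypothetical values
    "v1": (40, 50),
    "v2": (55, 60),
    "v3": (80, 70),
}
for version, (annotations, classes) in history.items():
    ratio = annotations / classes
    status = "ok" if ratio >= 1.0 else "below target"
    print(f"{version}: ratio {ratio:.2f} ({status})")
```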
Some of the measures might not be translatable into fixed, desired values. Here, the historical data provides information, for example, on whether the ontology becomes more extensive, interconnected, or thoroughly annotated. It allows an assessment of whether the development efforts are aligned with the set goals.
Also of interest to knowledge engineers is comparing expectations and reality for a given change. At times, a new ontology version has unintended side effects, e.g., unrecognized restructurings caused by moving or deleting classes. The numeric difference between versions can reveal such hidden consequences or unintended alterations.
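A sketch of such a version diff, again with made-up metric values, could look as follows.

```python
# A sketch of checking a new version for unintended side effects by
# diffing its metric values against the previous version. Metric names
# and values are illustrative.
old = {"classes": 120, "subclass axioms": 240, "annotations": 130}
new = {"classes": 95, "subclass axioms": 250, "annotations": 131}

for metric in old:
    delta = new[metric] - old[metric]
    if delta != 0:
        print(f"{metric}: {delta:+d}")
# A large unexpected drop (here: classes) hints at an unrecognized
# restructuring, e.g., moved or deleted classes.
```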
In this light, observing values that must not change is also helpful. Considering the document types of Figure 2 and Table 1, an ontology developed as a data dictionary should not have complex, formally described object properties. If such values suddenly occur, this can indicate a drift between ontology development and strategy.
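A minimal sketch of such an invariant watch, with an illustrative metric name and threshold, could look as follows.

```python
# A sketch of watching values that must not change: an ontology meant to
# stay a data dictionary should not accumulate formally described object
# properties. The metric name and invariant below are illustrative.
def check_invariants(metrics: dict) -> list[str]:
    """Return warnings for metric values that violate the document-type
    expectations (here: a data-dictionary-style ontology)."""
    warnings = []
    if metrics.get("object property characteristics", 0) > 0:
        warnings.append("object property characteristics appeared: "
                        "possible drift away from the data dictionary type")
    return warnings

print(check_invariants({"object property characteristics": 3}))
```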
4 CONCLUSIONS
Ontology languages like OWL and RDFS give knowledge engineers enormous freedom to model almost any document type or domain. This freedom, however, makes it difficult to assess the quality of an ontology. When developing an explicit specification of a conceptualization, there is more than one way to skin a cat, and the ontology has to be understood in light of the environment it needs to fit into and its strategic goals.
While ontology metrics offer an objective and
reproducible assessment, selecting the right metrics
for the given use case is cumbersome and non-trivial.
In this position paper, we argue for a metric selection
process. The requirements for an ontology can be
identified and mapped to ontology metrics using core
questions to evaluate an ontology's technical, usage,
and strategic setting. This process can be triggered
top-down, prior to an ontology development process,
or bottom-up for existing ontologies.
While proposals for ontology metrics are not a recent idea, there was an implementation gap for a long time, which was closed with the introduction of OntoMetrics and NEOntometrics. The question of how to put the metrics to use, however, remained. We believe that the proposed metric selection process can ease the productive use of ontology metrics for quality control and help knowledge engineers use metrics to measure individual progress toward self-set requirements and goals.
The next step in this research endeavor is applying the depicted selection process in real-world ontology development processes. We plan a case study with an enterprise that uses ontology metrics, helping it select the right metrics for its staff.
REFERENCES
Baader, F., Horrocks, I., & Sattler, U. (2008). Chapter 3 Description Logics. In Foundations of Artificial Intelligence. Handbook of Knowledge Representation (Vol. 3, pp. 135–179). Elsevier. https://doi.org/10.1016/S1574-6526(07)03003-9
Blagec, K., Barbosa-Silva, A., Ott, S., & Samwald, M. (2022). A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks. Scientific Data, 9(1), 322. https://doi.org/10.1038/s41597-022-01435-x
Bruijn, J. de, & Heymans, S. (2008). On the Relationship between Description Logic-based and F-Logic-based Ontologies. Fundamenta Informaticae, 82, 213–236.
Burton-Jones, A., Storey, V. C., Sugumaran, V., & Ahluwalia, P. (2005). A semiotic metrics suite for assessing the quality of ontologies. Data & Knowledge Engineering, 55(1), 84–102. https://doi.org/10.1016/j.datak.2004.11.010
Cross, V., & Pal, A. (2005). Metrics for ontologies. In NAFIPS 2005: 2005 Annual Meeting of the North American Fuzzy Information Processing Society, Detroit, MI, 26-28 June, 2005 (pp. 448–453). IEEE. https://doi.org/10.1109/NAFIPS.2005.1548577
Curtis, G., & Cobham, D. P. (2008). Business information systems: Analysis, design, and practice (6th ed.). Pearson.
Duque-Ramos, A., Fernández-Breis, J. T., Stevens, R., & Aussenac-Gilles, N. (2011). OQuaRE: A SQuaRE-based approach for evaluating the quality of ontologies. Journal of Research and Practice in Information Technology, 43(2), 159–176. https://www.scopus.com/inward/record.uri?eid=2-s2.0-84860428632&partnerID=40&md5=dcb393aa78ee79eca9bfe365b38ed0f1
Fallside, D. C., & Walmsley, P. (Eds.). (2004). XML Schema Part 0: Primer Second Edition. W3C. https://www.w3.org/TR/xmlschema-0/
Gangemi, A., Catenacci, C., Ciaramita, M., & Lehmann, J. (2006). Modelling Ontology Evaluation and Validation. In Y. Sure (Ed.), Lecture Notes in Computer Science: Vol. 4011. The Semantic Web: Research and Applications: 3rd European Semantic Web Conference, ESWC 2006, Budva, Montenegro, June 11-14, 2006; Proceedings (pp. 140–154). Springer. https://doi.org/10.1007/11762256_13
Gangemi, A., Catenacci, C., Ciaramita, M., Lehmann, J., Gil, R., Bolici, F., & Strignano, O. (2005). Ontology evaluation and validation: An integrated formal model for the quality diagnostic task. Trentino, Italy.
Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220. https://doi.org/10.1006/knac.1993.1008
Hitzler, P., Krötzsch, M., Rudolph, S., & Sure, Y. (2008). Semantic Web. Springer. https://doi.org/10.1007/978-3-540-33994-6
Institute for Telecommunication Sciences (ITS). (2001). American National Standard T1.523-2001: Telecom Glossary 2000. https://www.its.bldrdoc.gov/resources/federal-standard-1037c.aspx
Labrou, Y., & Finin, T. (1999). Yahoo! as an ontology. In S. Gauch (Ed.), Proceedings of the Eighth International Conference on Information and Knowledge Management - CIKM '99 (pp. 180–187). ACM Press. https://doi.org/10.1145/319950.319976
Lantow, B. (2016). OntoMetrics: Putting Metrics into Use for Ontology Evaluation. In J. Filipe, D. Aveiro, & J. L. Dietz (Chairs), 8th International Conference on Knowledge Engineering and Ontology Development, Porto, Portugal.
Lifschitz, V., Morgenstern, L., & Plaisted, D. (2008). Chapter 1 Knowledge Representation and Classical Logic. In Foundations of Artificial Intelligence. Handbook of Knowledge Representation (Vol. 3, pp. 3–88). Elsevier. https://doi.org/10.1016/S1574-6526(07)03001-5
Raad, J., & Cruz, C. (2015). A Survey on Ontology Evaluation Methods. In A. Fred (Ed.), Proceedings of the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management: Lisbon, Portugal, November 12-14, 2015 (pp. 179–186). SciTePress. https://doi.org/10.5220/0005591001790186
Reiz, A., Dibowski, H., Sandkuhl, K., & Lantow, B. (2020, November 2–4). Ontology Metrics as a Service (OMaaS). In J. Filipe, D. Aveiro, & J. L. Dietz (Chairs), 12th International Conference on Knowledge Engineering and Ontology Development, Budapest, Hungary.
Reiz, A., & Sandkuhl, K. (2022a). Debunking the Stereotypical Ontology Development Process. In Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (pp. 82–91). SCITEPRESS - Science and Technology Publications. https://doi.org/10.5220/0011573600003335
Reiz, A., & Sandkuhl, K. (2022b). NEOntometrics – A Public Endpoint For Calculating Ontology Metrics. In U. Şimşek, D. Chaves-Fraga, T. Pellegrini, & S. Vahdat (Eds.), Proceedings of Poster and Demo Track and Workshop Track of the 18th International Conference on Semantic Systems co-located with 18th International Conference on Semantic Systems (SEMANTiCS 2022). CEUR-WS.
Reiz, A., & Sandkuhl, K. (2022c). An Ontology for Ontology Metrics: Creating a Shared Understanding of Measurable Attributes for Humans and Machines. In Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (pp. 193–199). SCITEPRESS - Science and Technology Publications. https://doi.org/10.5220/0011551500003335
Reiz, A., & Sandkuhl, K. (2023). A Critical View on the OQuaRE Quality Framework. In 24th International Conference, ICEIS 2022, Virtual Event, April 25–27, 2022, Revised Selected Papers (accepted for publication).
Rocha, B. D., Silva, L., Batista, T., Cavalcante, E., & Gomes, P. (2020). An Ontology-based Information Model for Multi-Domain Semantic Modeling and Analysis of Smart City Data. In C. de Salles Soares Neto (Ed.), Proceedings of the Brazilian Symposium on Multimedia and the Web (pp. 73–80). Association for Computing Machinery. https://doi.org/10.1145/3428658.3430973
Tankeleviǧiene, L., & Damaševičius, R. (2009). Characteristics of domain ontologies for web based learning and their application for quality evaluation [E-mokymui(si) skirtos dalykinės srities ontologijos kokybes charakteristikos ir ju taikymas ontologijos kokybei vertinti]. Informatics in Education, 8(1), 131–152. https://doi.org/10.15388/infedu.2009.09
Tartir, S., Arpinar, I. B., Moore, M., Sheth, A. P., & Aleman-Meza, B. (2005). OntoQA: Metric-Based Ontology Quality Analysis. In D. Caragea, V. Honavar, I. Muslea, & R. Ramakrishnan (Chairs), IEEE Workshop on Knowledge Acquisition from Distributed, Autonomous, Semantically Heterogeneous Data and Knowledge Sources, Houston.
Uschold, M., & Gruninger, M. (2004). Ontologies and semantics for seamless connectivity. ACM SIGMOD Record, 33(4), 58–64. https://doi.org/10.1145/1041410.1041420
Vrandečić, D., & Sure, Y. (2007). How to Design Better Ontology Metrics. In E. Franconi, M. Kifer, & W. May (Eds.), Lecture Notes in Computer Science: Vol. 4519. The Semantic Web: Research and Applications: 4th European Semantic Web Conference, ESWC 2007, Innsbruck, Austria, June 3-7, 2007; Proceedings (pp. 311–325). Springer. https://doi.org/10.1007/978-3-540-72667-8_23