in (Gillespie et al., 2011), the five categories are
described in this paper as follows (a schematic
sketch is given after the list):
Human Actors. The representation of knowledge
that identifies the various types of human users who
interact with an ODCS in some fashion (e.g. end-users,
software developers, domain experts, etc.).
Compositional Units. The representation of knowledge
that identifies previously implemented pieces
of software that could be composed into a resultant
system (e.g. algorithms, web services, distributed
agents, etc.).
Workflow. The representation of knowledge that
identifies the process flow of different compositional
units to complete a given objective/task/goal (e.g. the
composition of a data aggregation script, statistical
model, and data plot module to complete a modelling
workflow).
Data Architecture. The representation of knowledge
that identifies the various forms of data sources and
specifications that could serve as input to, output from,
or flow through the resultant system and the individual
compositional units within it (e.g. a CSV file containing
emergency department visit time-series data).
Physical Resources. The representation of knowledge
that identifies physical execution environments
that could systematically execute a resultant system
constructed by an ODCS (e.g. a personal computer
with a specific operating system or a supercomputer
with a large number of processors).
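As a purely illustrative aid, the five categories can be
modelled in a few lines of Python; the class and field
names below are our own assumptions and are not
prescribed by the framework.

from dataclasses import dataclass
from enum import Enum


class KnowledgeCategory(Enum):
    """The five categories of ODCS knowledge (Gillespie et al., 2011)."""
    HUMAN_ACTORS = "human actors"
    COMPOSITIONAL_UNITS = "compositional units"
    WORKFLOW = "workflow"
    DATA_ARCHITECTURE = "data architecture"
    PHYSICAL_RESOURCES = "physical resources"


@dataclass
class KnowledgeEntity:
    """A single piece of identified knowledge within an ODCS."""
    name: str
    category: KnowledgeCategory


# Hypothetical entities drawn from the examples in the definitions above.
entities = [
    KnowledgeEntity("end-user", KnowledgeCategory.HUMAN_ACTORS),
    KnowledgeEntity("data aggregation script", KnowledgeCategory.COMPOSITIONAL_UNITS),
    KnowledgeEntity("modelling workflow", KnowledgeCategory.WORKFLOW),
    KnowledgeEntity("ED visit time-series CSV", KnowledgeCategory.DATA_ARCHITECTURE),
    KnowledgeEntity("multi-processor supercomputer", KnowledgeCategory.PHYSICAL_RESOURCES),
]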
To complement the five categories of knowledge
depicted in Figure 1, three further conceptual
considerations are illustrated: human and system
influences, syntactic and semantic knowledge
representation, and the relationships between the
different categories of knowledge.
A differentiation between syntactic and semantic
knowledge representation is illustrated in Figure 1.
Essentially, entities of knowledge described as
"syntactic" represent physical objects considered
within an ODCS (e.g., algorithm, web service, data
source, data set, person, computer server, etc.),
whereas "semantic" knowledge entities represent the
'realization' of the syntactic entities (e.g., programming
language, functional purpose, dimensions/structure of
data, human actor role, operating system environment,
etc.). In terms of semantic representation, five
sub-types can be considered: function, data, execution,
quality, and trust. Gillespie et al. (2011) and
Cardoso (2005) describe these further.
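To make the distinction concrete, the following sketch
(continuing the illustrative Python model above) pairs a
syntactic entity with semantic annotations drawn from
the five sub-types; the entity and annotation values are
hypothetical examples of our own, not drawn from any
particular ODCS.

from dataclasses import dataclass


@dataclass
class SyntacticEntity:
    """A physical object considered within an ODCS (e.g. a web service)."""
    name: str


@dataclass
class SemanticAnnotation:
    """A 'realization' of a syntactic entity, tagged with one of the five
    semantic sub-types: function, data, execution, quality, or trust."""
    subtype: str
    description: str


# A web service (syntactic) and the semantic knowledge that realizes it.
service = SyntacticEntity("outbreak-detection web service")
realizations = [
    SemanticAnnotation("function", "aberration detection over time-series data"),
    SemanticAnnotation("data", "consumes CSV time-series; produces alert records"),
    SemanticAnnotation("execution", "stateless HTTP endpoint"),
]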
Finally, the framework identifies the relationships
between the categories of knowledge. These relation-
ships can also be described as either syntactic or se-
mantic.
3 UTILIZATION OF THE FRAMEWORK FOR ONTOLOGY EVALUATION
As stated in Section 2.3, the framework serves
as a tool to facilitate effective ontology engineering
methodologies for ODCS ontological knowledge. In
this section we suggest how the framework can be uti-
lized in the context of ontology evaluation by present-
ing a knowledge framework checklist. This checklist
can be applied by any ontology engineer who is in-
vestigating the ontological knowledge for an ODCS.
Following the work of (Brank et al., 2005),
(Vrandečić, 2009) provided a description of different
aspects of ontology evaluation. As discussed in
Section 2.2, one of these aspects is context. Our focus
for this paper is to evaluate the adaptability of con-
text. Context is defined as the consideration of
aspects of the ontology in relation to other variables
in its environment (Vrandečić, 2009). ODCS-specific
examples may include human influence, an applica-
tion using the ontology, a data source the ontology
describes, etc. Due to the high-level categorical repre-
sentation that the knowledge identification framework
provides, context is the aspect of ontology evaluation
that best fits our assessment.
An ontology evaluation assesses how well a given
aspect satisfies certain criteria/metrics
(Vrandečić, 2009). In terms of the knowledge frame-
work and the nature of ODCS applications, adapt-
ability is considered. In (Gillespie et al., 2011), we
argued that the framework can assist with questions
such as “How can ontological knowledge represented
in ODCS ‘A’ be utilized or integrated into the on-
tologies for ODCS ‘B’?”. Adaptability deals with
the extent to which the ontology can be extended
and/or specialized without breaking or removing
existing axioms (Vrandečić, 2009). Therefore, within
this ontology evaluation example we plan to assess
the adaptability of the context in a specific ODCS’s
ontologies.
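As a toy illustration of this criterion, the following
sketch treats an ontology as a set of opaque axiom
strings and accepts an extension only if every existing
axiom is preserved; a genuine evaluation would operate
on OWL axioms, likely with reasoner support, but the
set-containment idea is the same.

def is_adaptable_extension(original_axioms: set, extended_axioms: set) -> bool:
    """A toy adaptability check: an extension is acceptable (in the sense
    of Vrandečić, 2009) if every original axiom survives, i.e. nothing
    is broken or removed."""
    return original_axioms.issubset(extended_axioms)


# ODCS 'A' ontology extended with knowledge adapted from ODCS 'B';
# axioms are treated here as opaque strings purely for illustration.
odcs_a = {"StatisticalModel subClassOf CompositionalUnit",
          "CompositionalUnit hasInput DataSource"}
odcs_a_plus_b = odcs_a | {"AberrationDetector subClassOf StatisticalModel"}

assert is_adaptable_extension(odcs_a, odcs_a_plus_b)  # nothing removed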
3.1 Evaluation Checklist: A Concept from Software Quality Assurance
Within the software engineering industry, long-standing
initiatives have been put in place for software
quality assurance (SQA) (International Standards Or-
ganization, 2001; McCall et al., 1977). One of the
main SQA standards calls for the development of soft-