lated ontologies. This allows a separation between the
problem and application descriptions and the work-
flow mechanism. As a result, the virtual workflow
machine may work in different problem domains if
the problem and application descriptions are changed.
Consequently, this promotes reusability and provides a conceptualisation that can be shared among different domain experts, such as marine biologists, image processing experts, user interface designers and
workflow engineers. These ontologies are also piv-
otal for reasoning. For instance, in the selection of
optimal VIP software modules, the Capability Ontol-
ogy is used to record known heuristics obtained from
VIP experts.
The Goal Ontology contains the high-level questions posed by the user, which the system interprets as VIP tasks (termed goals), together with the constraints on those goals. We constructed the Goal Ontology by mapping these user requirements onto a high-level abstraction of the capabilities of the VIP modules. To date, the Goal Ontology contains 52
classes, 85 instances and 1 property. Figure 3 shows
the main concepts derived in the F4K domain. Under
these general concepts, more specific goals may be
defined, for example ‘Fish Detection’, ‘Fish Track-
ing’, ‘Fish Clustering’, ‘Fish Species Classification’
and ‘Fish Size Analysis’. The top-level concepts are kept general so that the ontology can easily be extended with other (new) tasks as they arise over time.
Figure 3: Top level goals in the Goal Ontology.
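As an illustration, a small fragment of such a goal hierarchy could be encoded in OWL as sketched below using the Python rdflib library; the namespace URI and the flat placement of the goals directly under a generic ‘Goal’ class are assumptions made for this sketch rather than the actual content of the F4K Goal Ontology.

# Illustrative sketch of a Goal Ontology fragment built with rdflib.
# The namespace URI and the flat hierarchy under 'Goal' are assumptions,
# not the real F4K Goal Ontology (which groups goals under the top-level
# concepts shown in Figure 3).
from rdflib import Graph, Namespace, RDF, RDFS, OWL

GOAL = Namespace("http://example.org/f4k/goal#")  # hypothetical URI

g = Graph()
g.bind("goal", GOAL)
g.add((GOAL.Goal, RDF.type, OWL.Class))

# Specific goals mentioned in the text, modelled as subclasses of Goal.
for name in ["FishDetection", "FishTracking", "FishClustering",
             "FishSpeciesClassification", "FishSizeAnalysis"]:
    g.add((GOAL[name], RDF.type, OWL.Class))
    g.add((GOAL[name], RDFS.subClassOf, GOAL.Goal))

print(g.serialize(format="turtle"))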
The Video Description Ontology describes the concepts and relationships of the video and image data: what constitutes the data; acquisition conditions such as lighting, colour information, texture and environmental conditions; spatial relations; and the range and type of their values. The main class of this ontology is the ‘Video Description’ class. Video descriptions comprise visual elements, such as a video/image’s geometric and shape features (e.g. size, position and orientation), and non-visual elements (acquisitional effects), such as a video/image’s brightness (luminosity), hue and noise conditions. Environmental conditions, which are acquisitional effects, include factors such as current velocity, pollution level, water salinity, surge or waves, water turbidity, water temperature and typhoons. The Video Description Ontology currently has 24 classes, 30 instances and 4 properties.
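A comparable fragment for the Video Description Ontology is sketched below; again, the namespace, class and property names are assumptions made for illustration and are not taken from the deployed ontology.

# Illustrative Video Description Ontology fragment (rdflib).
# Namespace, class and property names are assumptions for this sketch only.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, XSD

VD = Namespace("http://example.org/f4k/videodesc#")  # hypothetical URI
g = Graph()
g.bind("vd", VD)

# Main class, the visual / acquisitional split, and one environmental condition.
hierarchy = [
    (VD.VideoDescription, None),
    (VD.VisualElement, VD.VideoDescription),
    (VD.AcquisitionalEffect, VD.VideoDescription),
    (VD.Brightness, VD.AcquisitionalEffect),
    (VD.EnvironmentalCondition, VD.AcquisitionalEffect),
    (VD.WaterTurbidity, VD.EnvironmentalCondition),
]
for cls, parent in hierarchy:
    g.add((cls, RDF.type, OWL.Class))
    if parent is not None:
        g.add((cls, RDFS.subClassOf, parent))

# An object property linking a description to its acquisitional effects,
# and a datatype property for a recorded level (e.g. "low"/"high").
g.add((VD.hasAcquisitionalEffect, RDF.type, OWL.ObjectProperty))
g.add((VD.hasLevel, RDF.type, OWL.DatatypeProperty))
g.add((VD.hasLevel, RDFS.range, XSD.string))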
The Capability Ontology (Figure 4) contains classes of video and image processing tools and techniques, performance measures for the tools, and known domain heuristics. This ontology has been used to identify the tools that should be selected for workflow composition and execution of VIP tasks (Nadarajan et al., 2013). Its main concepts are ‘VIP Tool’, ‘VIP Technique’ and ‘Domain Descriptions for VIP Tools’. Each VIP tech-
nique can be used in association with one or more
VIP tools. A VIP tool is a software component that
can perform a VIP task independently, or a func-
tion within an integrated vision library that may be
invoked with given parameters. ‘Domain Description for VIP Tool’ represents a combination of known domain descriptions (video descriptions and/or constraints on goals) that are recommended for a subset of the tools. It is used to indicate the suitability of a VIP tool when a given set of domain conditions holds at a certain point of execution. The Capability Ontology has also been used for reasoning during workflow composition by planning: since planning takes preconditions into account before selecting a step or tool, it assesses the domain conditions that currently hold and matches them against an appropriate VIP tool. The Capability Ontology has been populated with 42 classes, 71 instances and 2 properties.
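The flavour of this precondition-based selection can be illustrated with the following Python sketch; the tool names, techniques and domain conditions are invented examples and do not reflect the actual instances of the Capability Ontology.

# Illustrative sketch of precondition-based tool selection during planning.
# Tool names, techniques and domain conditions are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class VIPTool:
    name: str
    technique: str
    # Domain descriptions (video descriptions and/or goal constraints) under
    # which the tool is recommended, acting as planning preconditions.
    preconditions: frozenset = field(default_factory=frozenset)

TOOLS = [
    VIPTool("gaussian_mixture_detector", "background_subtraction",
            frozenset({"brightness:normal", "turbidity:low"})),
    VIPTool("adaptive_threshold_detector", "thresholding",
            frozenset({"turbidity:high"})),
]

def applicable_tools(holding_conditions):
    """Return the tools whose domain preconditions all hold right now."""
    return [t for t in TOOLS if t.preconditions <= holding_conditions]

# Example: murky water at this point of execution.
print([t.name for t in applicable_tools({"turbidity:high", "brightness:low"})])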
For ontology development and visualisation, the ontologies were encoded in OWL 1.0 (McGuinness and van Harmelen, 2004) using Protege version 4.0. Where applicable, ontology diagrams were produced with the OntoViz plugin (Sintek, 2007). These ontologies have supported the first version of the workflow system, which has been evaluated for efficiency, adaptability and user learnability on video classification, fish detection and fish counting tasks in a single-processor environment (Nadarajan et al., 2011).
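For completeness, the snippet below sketches how one of the generated OWL files could be loaded and inspected programmatically with rdflib; the file name is a placeholder rather than the name of an actual F4K artefact.

# Sketch: loading a generated OWL file and counting classes/properties.
# "capability_ontology.owl" is a placeholder file name.
from rdflib import Graph, RDF, OWL

g = Graph()
g.parse("capability_ontology.owl", format="xml")  # RDF/XML as exported by Protege

n_classes = len(set(g.subjects(RDF.type, OWL.Class)))
n_obj_props = len(set(g.subjects(RDF.type, OWL.ObjectProperty)))
print(f"{n_classes} classes, {n_obj_props} object properties")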
More recent development and preparation of the