Towards Semantic KPI Measurement
Kyriakos Kritikos¹, Dimitris Plexousakis¹ and Robert Woitsch²
¹ Information Systems Laboratory, ICS-FORTH, Greece
² BOC, Austria
Keywords: KPI, Semantics, Ontologies, Quality, QoS, Linked Data, Analysis, SPARQL

Abstract: Linked Data (LD) constitute a great mechanism for integrating information across disparate sources. The
respective technology can also be exploited to perform inferencing for deriving added-value knowledge. As
such, LD technology can greatly assist in performing various analysis tasks over information related to business
process execution. In the context of Business Process as a Service (BPaaS), the first real challenge is to
collect and link information originating from different systems by following a certain structure. As such, this
paper proposes two main ontologies that serve this purpose: a KPI and a Dependency one. Based on these
well-connected ontologies, an innovative Key Performance Indicator (KPI) analysis system is then built which
exhibits two main analysis capabilities: KPI assessment and drill-down, where the second can be exploited to
find root causes of KPI violations. Compared to other KPI analysis systems, LD usage enables the flexible
construction and assessment of any KPI kind, allowing experts to better explore the possible KPI space.
1 INTRODUCTION
Business processes (BPs) enable organisations to for-
mulate and realise internal and external procedures
which provide support or enable their core business.
Respective information systems and IT technology
then enable the execution and management of these
BPs to support core service and product delivery.
Flexible BP management and optimisation are enabled
via a lifecycle comprising the four main activities
of design, allocation, execution and evaluation. The
first three activities focus on bridging the well-known
business-to-IT gap and enabling the BP execution.
The last activity facilitates deriving business intel-
ligence information via performing various analysis
tasks which can facilitate BP improvement, thus clos-
ing the aforementioned lifecycle.
A well-studied BP evaluation task concerns Key
Performance Indicator (KPI) measurement and as-
sessment. KPIs map to certain indicators related to
BP quality. They usually include a metric and a
threshold imposed on it, thus defining the minimum
respective performance level to be sustained. The
metric provides all measurement details needed to
measure different BP quality attributes, which can be
categorised into 4 groups: (a) time, (b) quality, (c)
customer satisfaction, and (d) financial (Kaplan and
Norton, 1992). As such, the main goal of an evalua-
tion expert would be to specify suitable KPIs, possi-
bly spanning all four categories, which can be mea-
sured by the BP evaluation system and enable assess-
ing the quality levels of BPs.
In this respect, various KPI measurement systems
were proposed in the past, relying on different tech-
nologies, such as OLAP (Chowdhary et al., 2006)
or SQL query evaluation (Castellanos et al., 2005).
While KPI assessment can be performed extremely
fast in these systems, we believe that the main goal of
a KPI measurement system should not be the KPI as-
sessment speed but to provide assistance to evaluation
experts in defining the most suitable KPIs for a BP.
As such, there is a lack of flexible and user-intuitive
mechanisms via which KPIs can be defined in these
systems. Moreover, such systems are usually special-
purpose, as they are designed to serve certain KPI
metric types, such that the introduction of a new met-
ric can require re-engineering the underlying system
database. Finally, they do not employ sophisticated
information integration mechanisms to integrate any
kind of information source, even external ones.
The latter issue is critical in the context of BP as a
Service (BPaaS), i.e., BPs that are moved to the cloud.
Such a migration is becoming a trend nowadays due
to the great advantages that cloud computing brings
about, such as reduced cost and elasticity. To this end,
support for this migration is greatly needed. This sup-
port can be realised in the form of a BPaaS manage-
ment system, able to manage the whole lifecycle of a
BPaaS.
As indicated in (Woitsch et al., 2015), an archi-
tecture to support the BPaaS management is quite
sophisticated, involving different environments and
components that can be hosted in different virtual ma-
chines (VMs). As such, to enable a BP’s measure-
ment, the system would have to collect and integrate
information coming from many of these components.
Even external information might be required, out of
the control of the BP management platform, as in the
case of platform as a service (PaaS) services.
To realise the vision of a BPaaS, enabling the flex-
ible BP allocation and execution in the cloud, as well
as to address the aforementioned drawbacks, this paper
endorses the usage of Linked Data (LD) technology
to support the KPI analysis of BPaaS. This technol-
ogy is selected for the following reasons: (a) it allows
performing inferencing tasks to deduce added-value
analysis information; (b) it enables integrating infor-
mation across disparate information sources, even
in unforeseen ways; (c) LD are expressed via ontolo-
gies which are closer to human conceptualisation.
The information integration task is assisted by in-
troducing two ontologies: (a) a dependency ontology
capturing the dependencies between BPaaS compo-
nents, across different abstraction levels (BP, software
and infrastructure), and their state; (b) a KPI ontology
as an extension of OWL-Q (Kritikos and Plexousakis,
2006) enabling the complete KPI specification. The
first ontology constitutes the major integration point
for information coming from different systems, en-
abling its suitable correlation for supporting KPI anal-
ysis. The second ontology enables formally specify-
ing how KPIs can be measured over which BPaaS hi-
erarchy components. As such, via introducing KPI
metric hierarchies that span the whole BPaaS hierar-
chy, the measurability of KPIs is guaranteed.
By building on these two ontologies, an innovative
KPI measurement system has been developed, able
to integrate information from many parts of a BPaaS
management system and offer two KPI analysis ca-
pabilities: KPI measurement and drill-down. The
second capability relies on connecting different KPIs
at both business and technical levels which enables
performing root cause analysis over a high-level KPI
violation. The proposed system enables the on-the-
fly KPI metric formula specification and assessment,
provided that the formula’s correlation to a certain
context is given. This showcases the great flexibility
offered in KPI measurement, which greatly assists in
the best possible exploration of the KPI metric space.
The rest of this paper is structured as follows. Sec-
tion 2 reviews the related work. Section 3 offers back-
ground information necessary to better understand the
main paper contribution. Section 4 analyses the two
ontologies proposed. Section 5 provides the proposed
system architecture and exemplifies the way KPI anal-
ysis is performed. Finally, Section 6 concludes the
paper and draws directions for further research.
2 RELATED WORK
Based on the main paper contributions, related work
spans KPI & dependency modelling and KPI analysis.
Thus, its analysis is split into three sub-sections.
2.1 KPI Meta-Models
As KPI modelling is a pre-requisite for KPI assess-
ment, a great amount of research work was devoted
to producing KPI meta-models, languages and on-
tologies, especially as currently there is a lack of
standardised BP languages that appropriately cover
the BP context perspective (including goal-based and
measurement information aspects).
To evaluate the related work in KPI modelling, we
rely on a systematic approach which considers a set of
comparison criteria, summarises the comparison in
the form of an evaluation table, where rows map to
the related work approaches, columns to the criteria
and cells to the performance of an approach over a
certain criterion, and then includes a discussion over
the presented evaluation results.
The comparison criteria considered were the fol-
lowing: (a) KPI coverage: how well the notion of a
KPI is covered; (b) metric formulas: computation for-
mulas are provided supporting the KPI metric mea-
surement; (c) measurability: other aspects comple-
menting metric specification are needed to cover all
measurement details (e.g., units, measured objects);
(d) goal coverage: connecting KPIs to goals enables
assessing whether operational or even tactical goals
are satisfied by performing goal analysis; (e) seman-
tics: if the meta-model / language is semantic or al-
lows semantic annotations. Semantics enables for-
mal reasoning and reaching better evaluation accu-
racy levels; (f) information sources: ability of the lan-
guage to exploit both internal and external informa-
tion sources; (g) measurement origin: the language
ability to cover measurements and explicate their ori-
gin (probes, sensors, or humans); (h) level: the levels
covered (BP, SE - service, Inf - Infrastructure).
Table 1: Comparison table over KPI modelling work.

Work | KPI Cov. | Metric Formulas | Measur. | Goal Cov. | Semantics | Inf. Sources | Meas. Origin | Level
(Wetzstein et al., 2008a) | moderate | yes | moderate | no | no | internal | probes | BP, SE
(Motta et al., 2007) | low | no | low | yes | no | internal | - | BP, SE
(Pierantonio et al., 2015) | good | yes | low | yes | no | internal | - | BP
(Friedenstab et al., 2012) | good | yes | moderate | no | no | internal | probes | BP
(Frank et al., 2008) | moderate | no | low | yes | no | internal | probes | BP, Inf
(González et al., 2009) | low | yes | low | no | no | internal | probes | BP
(del Río-Ortega et al., 2016) | moderate | yes | good | yes | yes | internal | probes | BP
(Costello and Malloy, 2008) | low | no | low | no | yes | internal | probes | BP
(Liu et al., 2010) | low | yes | low | no | no | both | probes | BP, Inf
OWL-Q KPI Extension | good | yes | excellent | yes | yes | both | all | BP, SE, Inf

Based on the evaluation results in Table 1, we can
see that only our ontology scores well over all criteria;
it has better performance for almost all of them and can
be considered as the most prominent. The only mod-
elling work close to ours is the one in (del Río-Ortega
et al., 2016). However, that work does not cover
all levels, does not correlate measurements to human
sources, exploits only internal information sources
and provides a moderate KPI coverage. In addition,
it does not directly model the notion of a metric but
intermixes it with that of an indicator. In our opinion,
this is wrong, especially when the latter notion is re-
used in the context of KPI computation formulas, as it
maps to a metric condition and not to the metric itself
which represents all appropriate measurement details
to enable the KPI computation. Please note, though,
that the metric formula definition in that approach is
quite interesting as it involves a kind of restricted nat-
ural language form. This might be more user-intuitive
but clarity and comprehensiveness might be lost when
recursive composite metric formulas need to be spec-
ified. A more mathematical form might have been
more appropriate. This is actually an issue that we
currently explore.
2.2 Dependency Meta-Models
Dependency modelling is considered a pre-
requisite for system monitoring and adaptation. With-
out such knowledge, both monitoring and the respec-
tive adaptation capabilities can be quite limited, with
monitoring mainly covering low abstraction levels as
propagation to higher levels is prohibited.
By following the analysis approach in the previous
section, we have come up with the following evalua-
tion criteria: (a) abstraction level: which levels (de-
noted as BP, SE, Inf) in the BPaaS hierarchy are cov-
ered; (b) formalism: the dependency model formal-
ism used; (c) runtime: whether the dependency model
covers a dynamic or just a static system view. Dy-
namic views enable covering the system evolution and
provide support for realising monitoring and adapta-
tion mechanisms; (d) detail level: how well the com-
ponent dependencies are specified.
The respective work, apart from ours, encoded
in the table is the following: (a) SEE (Seedorf and
Schader, 2011), (b) GRU (Gruschke, 1998), (c) CUI
(Cui and Nahrstedt, 2001), (d) HASS (Hasselmeyer,
2001), (e) TOSCA (http://docs.oasis-open.org/tosca/
TOSCA/v1.0/TOSCA-v1.0.html) and (f) CAMEL
(www.camel-dsl.org).
Table 2: Evaluation table over dependency modelling work.

Work | Abst. Level | Formalism | Runtime | Detail Level
SEE | BP, SE | ontology | no | good
GRU | SE, INF | graph | yes | low
CUI | SE, INF | graph | yes | moderate
HASS | SE, INF | graph | yes | good
TOSCA | SE, INF | DSL | no | good
CAMEL | SE, INF | DSL | yes | good
Ours | all | ontology | yes | good
The comparison table results clearly show that
our ontology covers all possible levels, captures
runtime information and includes a good detail
level for the dependencies captured. It is thus
better than all other work. Sole competitors are the
approaches in (Seedorf and Schader, 2011; Rossini
et al., 2015) which do not cover all BPaaS hierar-
chy levels. Moreover, the approach in (Seedorf and
Schader, 2011) does not capture runtime information,
while CAMEL does not rely on semantics. We should
state that: (a) an ontology-based approach is essen-
tial to allow a better integration of dependency infor-
mation from different information sources as well as
interesting inferencing over this information; (b) the
good dependency detail level of some modelling ap-
proaches still leaves room for improvement.
2.3 KPI Analysis Systems
Various KPI analysis frameworks have been proposed
employing techniques that mainly support KPI evalu-
ation while in some cases KPI drill-down is also sup-
ported. Most techniques focus on appropriately struc-
turing the underlying database to support KPI analy-
sis. In this sense, they employ relational or semantic
dbs or data warehouses.
By following the same approach as in previous
subsections, we have compiled the next evaluation
criteria: (a) analysis types: which KPI analysis kinds
are supported; (b) db type: type of db used to store
the information needed for KPI analysis; (c) evalua-
tion technique: the technique used to measure KPIs;
(d) drill-down technique: the technique used for KPI
drill-down; (e) evaluation flexibility: system flexibil-
ity in the exploration of the possible metric space; (f)
level: the BPaaS hierarchy levels supported.
From the evaluation results of Table 3, we see that
semantic dbs are indeed considered in more than half of
the systems, signifying that their added-value is being
recognised in terms of better linking information and
enabling various forms of reasoning. We also see that
almost half of the systems focus only on KPI evalua-
tion. The approaches supporting KPI drill-down ex-
ploit two main techniques: decision trees and com-
bination of metric & KPI hierarchies. The first tech-
nique is suitable when there are measurability gaps
(disconnected metric trees) to be filled in. The sec-
ond is suitable when connections between KPIs and
metrics exist such that we can go down to more tech-
nical KPIs and then continue from there by exploring
the respective metric hierarchies involved.
A great variation in evaluation techniques can be
seen, from SQL queries, OLAP and event-based met-
ric formula calculation to WSML rules and SPARQL
queries. We believe, however, that SPARQL queries
can be more expressive, even compared to seman-
tic rules, as they: (a) allow different ways to link
the underlying semantic information; (b) have simi-
lar grouping and aggregation capabilities to SQL
queries; (c) work at the conceptual level, which is
closer to actual human conception.
Concerning evaluation flexibility, our system
seems to be one step ahead of the work in (Wet-
zstein et al., 2008b; Chowdhary et al., 2006; Diaman-
tini et al., 2014) as it not only allows mapping
human-oriented formulations of metric formulas into
SPARQL queries but also playing around with the met-
ric and condition context. Combined also with the re-
spective KPI ontology capabilities, it can support
exploiting various information sources, like metrics,
service properties and external ones, thus enabling a
better exploration of the metric space. In this respect,
our approach is more complete and user-intuitive than
the other systems.
Finally, we can see from the evaluation results that
only our system is able to cover all levels. In fact, only
three out of seven systems recognise the necessity
to cover more than one level in the BPaaS hierarchy.
3 BACKGROUND
This section shortly analyses OWL-Q, as it is the
basis for the KPI ontology proposed. OWL-Q is a
prominent (Kritikos et al., 2013) non-functional ser-
vice specification ontology that captures all possible
measurability aspects. Each aspect is covered by a re-
spective OWL-Q facet. OWL-Q is also accompanied
by semantic rules enabling two semantic reasoning
types: (a) semantic OWL-Q model validation based
on the domain semantics; (b) added-value knowl-
edge generation in the form of term equivalence facts.
OWL-Q currently comprises six main facets, which
are now analysed with a focus on those most relevant
to this paper's work.
The core facet enables specifying generic con-
cepts and properties, such as Schedule and name. Cat-
egory is one important facet concept, enabling con-
structing hierarchies of categories, i.e., partitions of
this and other element types, like quality metrics and
attributes. As such, this concept can assist in spec-
ifying structured quality models (see KPI categories
in Section 1) which can be re-used in the context of
non-functional capability and KPI description.
The attribute, unit and value type facets enable
specifying respective attribute, unit and value type el-
ements. Attributes (e.g., utilisation) represent prop-
erties that can be measured by metrics. Units can be
derived (e.g., bytes/sec), single (e.g., sec) or dimen-
sionless (e.g., percentage). Value types represent the
domain of values for metrics. As such, they can be
used to validate whether measurements or thresholds
in metric conditions are correct, by checking whether
these values fall within the respective value type.
The metric facet enables describing how attributes
can be measured via the conceptualisation of the Met-
ric concept. Metrics can be raw (e.g., uptime) or com-
posite (e.g., availability). Raw metrics are computed
from sensors or measurement directives posed over
service instrumentation systems. Composite metrics
are computed from formulas, i.e., function applica-
tions over a list of arguments, where an argument can
be a metric, attribute, service property or another for-
mula. Any metric can be related to respective contexts
detailing its measurement frequency and window.
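To illustrate, a composite availability metric derived from a raw uptime metric might be encoded as follows (a hedged sketch in SPARQL Update; the namespaces, individual URIs and exact property names are assumptions based on Figure 1, and the OWL list structure of the argument list is elided):

PREFIX : <http://www.example.org/kpi-models#>
PREFIX owlq: <http://www.example.org/owlq#>
INSERT DATA {
  :availability a owlq:CompositeMetric ;
      owlq:formula :availFormula .
  :availFormula a owlq:Formula ;
      owlq:function :division ;        # availability = uptime / total observation time
      owlq:argumentList :availArgs .   # an OWLList over the arguments (structure elided)
  :uptime a owlq:RawMetric ;
      owlq:sensor :uptimeSensor .      # raw metrics are computed from sensors or directives
}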
The specification facet enables describing non-
functional specifications as sets of respective capa-
bilities. Each capability is expressed as a constraint
which can be either a logical combination of other
constraints (i.e., a CompositeConstraint) or a simple
condition over a metric (i.e., a SimpleConstraint). A
metric condition is associated with two different con-
texts: the metric and condition ones. The latter ex-
plicates which object (e.g., service or service input) is
measured and the way the condition should be evalu-
ated over this object's instances.

Table 3: Evaluation table over KPI analysis work.

Work | Analysis Types | DB Type | Evaluation Technique | Drill-Down Technique | Evaluation Flexibility | Level
(Castellanos et al., 2005) | all | relational | SQL queries | decision trees | low | BP
(Wetzstein et al., 2009) | all | relational | formula comp. | decision trees | low | BP, SE
(Costello and Malloy, 2008) | evaluation | semantic | formula comp. | - | low | BP
(Wetzstein et al., 2008b) | evaluation | semantic | WSML rules | - | moderate | BP, SE
(Chowdhary et al., 2006) | evaluation | warehouse | OLAP | - | moderate | BP
(Diamantini et al., 2014) | all | semantic | SPARQL queries | KPI-based | moderate | BP
Our Framework | all | semantic | SPARQL queries | metric/KPI-based | good | all
4 KPI AND DEPENDENCY
ONTOLOGIES
To enable performing any KPI analysis kind, there
is a need to provide meta-models that structure and
link respective information on which the analysis re-
lies. As such, in the context of BPaaS KPI analysis,
we have developed two main ontologies: (a) the de-
pendency ontology covering BPaaS dependency mod-
els; (b) the KPI ontology covering KPI modelling.
These ontologies are interlinked in one main connec-
tion point, the actual BPaaS element measured within
the BPaaS hierarchy. Then, based on the BPaaS de-
pendency model and the interconnections between
different BPaaS elements, both measurement propa-
gation along these hierarchies to cover measurability
gaps and the discovery of root causes for KPI viola-
tions can be enabled.
4.1 KPI Ontology
As OWL-Q completely covers the specification of
QoS profiles and SLAs, it was decided to extend it to
cover the KPI specification. This OWL-Q extension
builds upon existing OWL-Q constructs, adding only
a minimal but sufficient number of relevant new parts.
The current application of this extension over the use
cases of the CloudSocket project (http://www.cloudsocket.eu)
shows that OWL-Q can model all KPIs needed. This
major evaluation step validates the design of this
OWL-Q extension.
Figure 1 depicts the KPI extension, where the grey
colour denotes core OWL-Q concepts, blue metric-
related concepts, green specification-related concepts,
and yellow a concept from the Dependency ontology,
while red denotes the new KPI extension concepts,
which map to a sub-facet of the specification one.
A KPI represents an indicator of whether BP per-
formance is satisfactory, problematic or erroneous,
mapping to three states captured by a warning and a
violation threshold. Performance for positively mono-
tonic metrics is satisfactory when above the warning
threshold, problematic when between the warning and
violation ones, and erroneous when below the viola-
tion one. For a negatively monotonic metric, the order
between warning and violation thresholds is reversed
and the state mapping is symmetric.
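To make this state mapping concrete, the following SPARQL sketch classifies the stored assessments of a KPI over a positively monotonic metric into the three states (the prefixes and property names are illustrative assumptions, not the normative vocabulary):

PREFIX : <http://www.example.org/kpi-models#>
PREFIX kpi: <http://www.example.org/owlq-kpi#>
SELECT ?assessment ?state
WHERE {
  :availabilityKPI kpi:warningThreshold ?warn ;    # KPI-specific warning threshold
                   kpi:violationThreshold ?viol .  # violation threshold of the constraint
  ?assessment kpi:assesses :availabilityKPI ;      # assumed link from a KPIAssessment to its KPI
              kpi:value ?v .
  BIND (IF(?v >= ?warn, "satisfactory",
        IF(?v >= ?viol, "problematic", "erroneous")) AS ?state)
}

For a negatively monotonic metric, the two comparisons would simply be reversed.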
A KPI has been modelled as a sub-class of simple
constraint that carries extra information. As a sim-
ple constraint already includes a reference to a metric
and (violation) threshold, this extra information spans
a human-oriented description (for human consump-
tion), a validity period and the warning threshold.
While OWL-Q more or less fully covers the con-
ceptualisation of a metric, it was extended to address
the major issue of external information access by in-
corporating such information in metric formulas. By
considering that all modern information sources are
available in the form of REST APIs or database end-
points, this extension was realised by introducing the
Query and APICall as sub-classes of Argument. As
such, instances of these classes can be directly used
as input arguments in metric formulas. A Query spec-
ifies in an implementation-independent way the re-
quired information to connect and query a db span-
ning: (a) the db’s connection URL; (b) the query lan-
guage; (c) the actual query; (d) the db type.
An APICall includes all information needed to
call the API and retrieve back the result, spanning: (a)
the API URL; (b) values to all input parameters for the
call; (c) input information encoding; (d) output format
(e.g., XML or JSON); (e) a JSON or XML-like script
(e.g., in XPath) to operate over the output returned.
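For illustration, the two new argument kinds could be instantiated as follows (a hedged sketch whose property names mirror the class attributes of Figure 1; the endpoint values are fictitious):

PREFIX : <http://www.example.org/kpi-models#>
PREFIX kpi: <http://www.example.org/owlq-kpi#>
INSERT DATA {
  :csatQuery a kpi:Query ;                     # db-backed formula argument
      kpi:connectionURL "jdbc:postgresql://crm.example.org/surveys" ;
      kpi:language "SQL" ;
      kpi:dbType "relational" ;
      kpi:query "SELECT AVG(score) FROM survey" .
  :usageCall a kpi:APICall ;                   # REST-backed formula argument
      kpi:URL "https://api.example.org/v1/usage" ;
      kpi:outputEncoding "JSON" ;
      kpi:script "$.data.value" .              # JSONPath-like script over the returned output
}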
It might be imperative in certain cases (e.g., cus-
tomer satisfaction metrics) to enable humans to man-
ually provide measurements to the system, so the
measurement-to-user linkage should also be modelled.
When connected to certain aspects like human trust
and reliability, such linkage can enable reasoning over
measurements and their propagation to establish a so-
called trust level over them and a more suitable way
to aggregate them. This was accommodated in OWL-
Q by not only associating a measurement to a specific
sensor or directive but also to a human resource that
might produce this measurement.

[Figure 1 (UML class diagram): core OWL-Q concepts (Metric, CompositeMetric, RawMetric, Measurement, MeasurementDirective, Sensor, Schedule, Window, Unit, ValueType, Attribute, Specification, Constraint, SimpleConstraint, ConstraintContext, Service) linked to the new KPI-extension concepts (KPI, KPIAssessment, Query, APICall, User, Goal with its Strategic, Tactical, Operational, Functional and NonFunctional sub-concepts, and Contribution).]
Figure 1: OWL-Q KPI Extension.
To enable a drill-down from higher- to lower-
level KPIs to support root-cause analysis, we asso-
ciate KPIs to each other via a child relation. This
relation must conform to the respective relation be-
tween the metrics of the involved KPIs (i.e., the par-
ent KPI’s metric should be a parent metric of the child
KPI metric). For instance, a KPI for service response
time could be related to KPIs mapping to the service
execution time and corresponding network latency.
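Under the same assumed vocabulary, such a drill-down hierarchy might be asserted as follows, where the metrics of the two child KPIs compose the response time metric of the parent:

PREFIX : <http://www.example.org/kpi-models#>
PREFIX kpi: <http://www.example.org/owlq-kpi#>
INSERT DATA {
  :responseTimeKPI kpi:child :executionTimeKPI ,
                             :networkLatencyKPI .
}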
During KPI assessment, we are interested in
checking also other information, such as the value
trend with respect to the previous assessments, by
performing different analysis kinds. For instance,
we could assess whether the BPaaS performance gets
gradually reduced from the very beginning. As such,
to also make a connection to the original OWL-Q
concept called Measurement, specifying a measure-
ment’s value and its timestamp, a new sub-class was
created, called KPIAssessment, which includes infor-
mation covering the value trend and the kind of KPI
violation (warning or fatal) that occurred.
A KPI should be connected to the business goal
that must be satisfied, so as to serve as an instrument
for assessing that goal's achievement. Such a linkage can
also enable performing goal-based analysis in order
to reach interesting conclusions related, e.g., to the
satisfaction of strategic goals from operational ones.
As such, OWL-Q was further extended to spec-
ify goals and their linking to KPIs. First, the Goal
concept, representing any goal kind, as well as re-
spective sub-concepts mapping to strategic, tactical,
operational, functional and non-functional goals were
introduced. Any goal was given a name, description
and application level, while operational goals were as-
sociated to the processes used to satisfy them (another
connection point with Dependency ontology). Goals
were also linked via AND/OR self-relations or con-
tribution relations to enable forming goal hierarchies
from strategic to operational goals. Contribution rela-
tions were modelled via the Contribution class which
links a goal with another goal or KPI and is mapped
to a specific level of contribution.
4.2 Dependency Ontology
To perform various analysis types over a certain sys-
tem, it is critical to model the evolution of the sys-
tem dependency model. Such a model reveals what
the system components are and how they are inter-
connected, along with the interconnection direction.
Such a direction can indicate the way faults and mea-
surements can be propagated from lower to higher ab-
straction levels. The opposite direction enables per-
forming root cause analysis, i.e., going from a current
issue at a high-level component down to the actual
component to blame at a lower level.
The proposed Dependency ontology covers both
deployment and state information about all compo-
nents in a BPaaS hierarchy. It extensively captures
many information aspects making it suitable for many
different kinds of BPaaS analysis, including: (a) KPI
analysis; (b) (semantic) process mining (De Medeiros
et al., 2007), as (semantic) I/O information for tasks
and workflows is covered; (c) best BPaaS deployment
analysis (Kritikos et al., 2016) as all possible deploy-
ment information across all levels is covered.
The Dependency Ontology, depicted in Figure
2, follows the well-known type-instance pattern, en-
abling the capturing of both the allocation decisions
made as well as the whole BPaaS allocation history
and evolution. The proposed ontology also covers
organisational information. In particular, the Ten-
ant concept was introduced to model an organisation
which is also associated to a User set. A tenant can
be a Broker, offering a BPaaS, a Customer, purchas-
ing a BPaaS, or a Provider, offering a cloud service
supporting the BPaaS execution. Customer organisa-
tions can be SMEs, start-ups or big companies (de-
noted via a string-typed enumeration).
The ontology analysis follows a top-down ap-
proach from the type to the instance level. At the type
level, the top concept represents the BPaaS which is
associated to an owner and an executable Workflow to
be run in the cloud. The latter is related to its main
Tasks. Tasks can have input and output Variables and
are related to a specific user or role that can be as-
signed to them. A Task can be further classified into a
ManualTask (performed by human workers), a Script-
Task (performed automatically via a script) and a Ser-
viceTask (performed automatically by calling a Soft-
ware as a Service (SaaS)).
A BPaaS corresponds to an allocated executable
workflow. This means that: (a) a workflow can be
shared by many BPaaS; (b) within one BPaaS, a spe-
cific set of allocations can be performed over a work-
flow. This resulted in modelling the Allocation con-
cept to represent an allocation decision and link it to
a certain BPaaS and workflow.
Each allocation maps a service task to a SaaS, ei-
ther an ExternalSaaS or a (internal) ServiceCompo-
nent. In case of a ServiceComponent, the allocation is
also related to an Infrastructure as a Service (IaaS). An
IaaS is characterised by the number of cores and the
main memory & storage size properties.
At the instance level, BPaaSInstance represents an
actual instance of a BPaaS associated with the Cus-
tomer that has purchased it, its actual cost and the
DeployedWorkflow. The latter represents the BPaaS
workflow deployed in the context of a customer upon
successful purchasing. The instances of this workflow
are then associated to: (a) the instances of tasks (Task-
Instance) created; (b) its start and end time; (c) its re-
sulting state (“SUCCESS” or “ERROR”); (d) the user
that has initiated it; (e) the adaptations performed on it
to keep up with the SLOs promised. Instances of tasks
of this workflow are associated to similar information
which includes the user (if any) executing them and
their generated input/output VariableInstances. The
latter map to the actual Variable concerned and pos-
sess the respective values generated.
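A small instance-level fragment might thus look as follows (a sketch; the URIs and property names are assumptions mirroring the concepts just described):

PREFIX : <http://www.example.org/bpaas-instances#>
PREFIX dep: <http://www.example.org/dependency#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
INSERT DATA {
  :bpaasInst1 a dep:BPaaSInstance ;
      dep:customer :acme ;                              # the purchasing Customer tenant
      dep:deployedWorkflow :dwf1 .
  :wfInst42 a dep:WorkflowInstance ;
      dep:ofDeployedWorkflow :dwf1 ;
      dep:startTime "2017-01-10T09:00:00Z"^^xsd:dateTime ;
      dep:endTime "2017-01-10T09:04:12Z"^^xsd:dateTime ;
      dep:state "SUCCESS" ;
      dep:taskInstance :taskInst7 .
  :taskInst7 a dep:TaskInstance ;
      dep:variableInstance :varInst3 .                  # generated I/O variable instance
}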
Two types of concrete allocations are modelled:
(a) from a deployed workflow task to a SaaSInstance
realising its functionality; (b) from internal SaaSIn-
stance to the IaaSInstance hosting it. Both a SaaS
and IaaS instance are sub-classes of ServiceInstance,
encompassing their common features mapping to: the
service’s endpoint and its physical & cloud location.
Physical locations are captured via the FAO (United
Nations Food and Agriculture Organisation) geopolit-
ical ontology (http://aims.fao.org/aos/geopolitical.owl).
CloudLocations are used to structure
arbitrary hierarchies of cloud locations to cover the
hierarchy diversity across different cloud providers.
The most usual BPaaS adaptation types across the
literature have been modelled: service replacement
and scaling ones. Any Adaptation is associated to its
start and end time, its final state and the adaptation
rule triggered. A ServiceReplacement is associated to
the service instance being substituted and the service
instance substituting it.
Any scaling maps to the IaaS to be scaled. Two
main scaling kinds are covered: (a) HorizontalScal-
ing where we specify one or more service compo-
nents hosted by the IaaS to be scaled plus the amount
of instances to be generated or removed; (b) Verti-
calScaling where we indicate the increase or decrease
amount of respective IaaS characteristic(s).
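For instance, a horizontal scale-out that adds two instances of a hosted service component might be recorded as follows (same hedged, illustrative vocabulary):

PREFIX : <http://www.example.org/bpaas-instances#>
PREFIX dep: <http://www.example.org/dependency#>
INSERT DATA {
  :scaleOut1 a dep:HorizontalScaling ;
      dep:scaledIaaS :iaas1 ;               # the IaaS to be scaled
      dep:serviceComponent :crmComponent ;  # hosted component whose instance set changes
      dep:instanceAmount 2 .                # positive: instances added; negative: removed
}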
Figure 2: Dependency ontology UML class diagram.
5 KPI ANALYSIS SYSTEM
5.1 Architecture
The architecture of the KPI analysis system, depicted
in Figure 3, follows a Service-Oriented Architecture
and the known three-level implementation pattern of
UI-business logic-database. This system comprises
ten main components. The Hybrid Dashboard is the
main entry point to the system, from which respective
analysis tasks can be performed and their results then
presented according to suitable visualisation metaphors.
The Conceptual Analytics Service is a REST ser-
vice offering the two main KPI analysis capabilities.
As such, it can be exploited by external components
to programmatically deliver these capabilities.
The Conceptual Analytics Engine orchestrates the
way the KPI analysis and drill-down can be per-
formed by invoking three main components: (a) the
Metric Specification Extractor which extracts the KPI
metric definition from the Semantic KB; (b) the Met-
ric Formula Extender which expands the extracted
metric's formula based on the metric's derivation hier-
archy; (c) the SPARQL Transformer which transforms
the expanded metric formula into a SPARQL
(https://www.w3.org/TR/rdf-sparql-query/) query to
be assessed over the Semantic KB.

Figure 3: Architecture of the KPI Analysis System.
The Semantic KB is a semantic Triple Store en-
abling the management and storage of semantic infor-
mation, structured based on the two ontologies pro-
posed. To address the heterogeneity of different triple
store implementations and their exchange, a Seman-
tic KB Service was developed on top of this KB to
offer a RESTful interface enabling LD management
via methods that facilitate issuing SPARQL queries,
inserting as well as updating RDF graphs.
The Harvester populates the Semantic KB by us-
ing the Semantic KB Service. This component first
obtains the needed information from disparate infor-
mation sources within the BPaaS management proto-
type and then semantically enhances, links and struc-
tures it based on the two proposed ontologies. The
population is performed periodically, so as not to
overwhelm the system, but frequently enough with
respect to the way KPI measurements are assessed.
Finally, the Meta-Model Repository includes ba-
sic information about BPs like their models and an-
notations to be exploited for visualisation and infor-
mation harvesting purposes. In the current implemen-
tation prototype, this component is shared between
the BPaaS Design and Evaluation environments. This
attests to the closeness between the two BP lifecycle
activities and the respective level of cooperation that
has to be established between them.
5.2 KPI Analysis
No matter the KPI analysis form, the core KPI
metric evaluation functionality needs to be realised.
As such, we have selected a semantic approach for
this realisation for two main reasons: (a) the eval-
uation accuracy is higher; (b) semantic linking en-
ables exploring the actual space of possible KPI met-
ric computation formulas. In fact, the latter reflects
the current KPI evaluation practice where there might
be some KPI metrics that can be fixed in advance
(e.g., cross-domain metrics) while the rest of the met-
rics must be computed based on the knowledge and
expertise of the (BP performance) evaluator.
In contrast to other forms of measurement stor-
age and aggregation, the semantic linking enables the
rich connection between different information aspects
to facilitate the metric formula construction. On the
other hand, measurement system alternatives, such as
Time Series Data Bases, require an a priori design of
the measurement space and do not allow advanced
forms of information linking and aggregation.
By relying on a semantic approach and adopting
LD technology, the most intuitive way to express met-
ric formulas is via SPARQL queries. The main dif-
ficulty lies in the fact that SPARQL queries require
deep knowledge about LD technology and great ex-
pertise in SPARQL query modelling which might not
be possessed by a BP performance evaluation expert.
This expert would rather prefer to mathematically
specify the metric formula in a simplified language.
This observation has led to the need to transform the
KPI metric specification, as specified in the KPI on-
tology, into a SPARQL query specification. By re-
lying on a user-intuitive OWL-Q editor and the fact
that ontologies represent human conceptualisations of
a domain, the expert can more naturally specify the
metric formula. This obstacle could be further over-
come by introducing a domain-specific language for
pure mathematical metric formula expressions.
This metric formula to SPARQL query transfor-
mation involved a set of specific hurdles that had to
be overcome. First, it relies on the metric kind. We
distinguish between two metric kinds: (a) customer-
specific, pertaining to a certain BPaaS instance pur-
chased by the customer; (b) broker-specific, pertain-
ing to an overall performance of the BPaaS offered.
Customer-specific metrics have as a measurement
space all the measurements produced for the cus-
tomer’s BPaaS instance, while broker-specific metrics
have a broader measurement space spanning all mea-
surements over all instances of a BPaaS.
Second, the transformation is complicated by two
main factors: (a) it should not only consider the met-
ric itself (i.e., the actual computation) but also the
metric and condition context, which signifies that all
this information should be linked together to obtain
the right set of measurements to be aggregated; (b) the
dynamic evaluation kind envisioned, where the ex-
pert can play with formulas, metric kinds, evaluation
(schedule & windows) and history periods, does not
enable storing the measurements in the physical stor-
age once they are derived. As such, the way lower-
level KPI metrics can be produced needs to be taken
into account when attempting to compute a high-level
KPI metric. This also signifies that we might need
to go down even to the level of resource or low-level
metrics for which measurements are already produced
to derive the measurements for a high-level KPI.
By considering the above two issues, a particular
transformation algorithm has been developed which,
depending on the input provided, attempts to con-
struct dynamically the SPARQL query to be issued
for deriving the respective metric measurement. The
pseudo-code of this algorithm is shown in Listing 1.
Listing 1: Transformation & Drill-Down Algorithms Pseudo-code.
ResultSet evalKPI(Metric m, Object object, BPaaS bpaas,
    DateTime start, DateTime end, String custId) {
  // expand the metric's formula down to metrics with stored measurements
  MetricFormula mf = expandFormula(m.formula);
  List<String> vars = getVars(mf);
  String clause = getClause(mf);
  String query = createQuery(vars, clause, object, bpaas,
      start, end, m.metricContext.schedule, custId);
  return runQuery(query);
}

String createQuery(List<String> vars, String clause,
    Object obj, BPaaS bpaas, DateTime start, DateTime end,
    Schedule schedule, String customerId) {
  String query = insertPrefixes();
  query += applyClause(clause, getBrokerGraph());
  query += createMeasurementTriples(vars, obj.URI);
  query += createInterLink(bpaas, obj, customerId);
  query += applyFilters(start, end, vars);
  query += applyGrouping(schedule, vars);
  return query;
}

Hashtable<Metric, ResultSet> drillDown(Metric m,
    Object object, BPaaS bpaas, DateTime start,
    DateTime end, String custId) {
  MetricTree mt = expandFormulaInTree(m.formula);
  Set<MetricNode> metrics = getLeaves(mt);
  Hashtable<Metric, ResultSet> results =
      new Hashtable<Metric, ResultSet>();
  while (!metrics.isEmpty()) {
    for (MetricNode mn : metrics) {
      // leaf metrics are evaluated via SPARQL; inner ones from their children's results
      if (mn.isLeaf()) results.union(evalKPI(mn.metric,
          object, bpaas, start, end, custId));
      else results.union(measureKPI(mn, object, start,
          end, custId, results));
    }
    metrics = getParents(metrics);
  }
  return results;
}
The algorithm (see the evalKPI method) com-
prises five steps: (a) metric formula expansion, which
involves the recursive substitution of component met-
rics, for which measurements are not stored in the Se-
mantic KB, with the derivation formula of these met-
rics; (b) derivation of query variables from those met-
rics, i.e., the leaf metrics, in the expanded formula for
which measurements have been stored; (c) production
of the (SPARQL) select clause from the expanded for-
mula; (d) production of the whole SPARQL query; (e)
evaluation of the SPARQL query over the Semantic
KB.
The SPARQL query production (see
createQuery method) includes the execution of
the following steps: (i) the creation of the query
prefixes; (ii) the application of the SELECT & FROM
clauses by also considering the respective LD graph
URI mapping to the individual RDF graph of the
broker from which the relevant information for the
query evaluation can be obtained; (iii) the creation
of triples mapping to the measurements of the leaf
metrics / variables; (iv) the enforcement of the
interlinking between measurements according to the
object being measured, the customer (if given as
input) and the respective BPaaS instances of the
measurements; (v) the application of the filtering
(FILTER clause) over the history period to consider
measurements produced only on that period; (vi) the
application of SPARQL GROUP BY clauses based on
the KPI metric evaluation period, i.e., its schedule.
In order to exemplify the transformation algorithm
and make it easier to understand, we focus on high-
lighting its application on a specific example of a KPI
metric and we especially take a closer look at the re-
spective SPARQL query being generated.
Suppose that we need to measure the average
availability metric AVG_A for the whole BPaaS
workflow, which can be computed from the formula
MEAN(RAW_A), where RAW_A represents the instance-
based availability metric for this workflow. Moreover,
further assume that: (i) the availability metric should
be calculated every 1 hour, while the raw availability
one every minute; (ii) the history period is 1 day.
The first step of the transformation algorithm
will expand this formula based on the measur-
ability of its component metrics. In particu-
lar, as RAW_A is not stored in the Semantic KB,
it is further expanded into its derivation formula
UPTIME / TOTAL_OBSERVATION_TIME, where UPTIME
is a raw metric and TOTAL_OBSERVATION_TIME is a
constant. In this sense, the final expanded formula
becomes: MEAN(UPTIME / TOTAL_OBSERVATION_TIME).
From this formula, the next two algorithm steps will
produce a set of one variable ("?uptime") and the se-
lect clause ("SELECT (AVG(?uptime / 60) as ?value)
(MAX(?uptime_ms_ts) as ?date)"). Please note that as
uptime is calculated every second, the total observa-
tion time constant is 60.
The fourth step will focus on generating the actual
SPARQL query which is depicted in Figure 4. This
figure is now explained by focusing on all the steps
involved in the createQuery method and the content
generated by them.
Figure 4: Constructed SPARQL query for the example.
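A plausible form of this query is sketched below; the graph URI and all predicate names are illustrative assumptions rather than the actual generated vocabulary, while the line numbers match those referenced in the explanation that follows.

1  PREFIX owlq: <http://www.example.org/owlq#>
2  PREFIX dep: <http://www.example.org/dependency#>
3  SELECT (AVG(?uptime / 60) AS ?value) (MAX(?uptime_ms_ts) AS ?date)
4  FROM <http://www.example.org/graphs/broker1>
5  WHERE { ?ms a owlq:Measurement ;
6      owlq:metric owlq:Uptime ;
7      owlq:value ?uptime ;
8      owlq:timestamp ?uptime_ms_ts ;
9      owlq:measuredObject ?wfInstance .
10     dep:BPaaS1 dep:instance ?bpaasInst .
11     ?bpaasInst dep:deployedWorkflow ?dwf .
12     # ?bpaasInst dep:customer dep:Customer1 .
13     ?dwf dep:instance ?wfInstance .
14
15     # FILTER (?uptime_ms_ts >= "2017-01-09T00:00:00Z"^^xsd:dateTime && ?uptime_ms_ts <= "2017-01-10T00:00:00Z"^^xsd:dateTime)
16 }
17 GROUP BY (MONTH(?uptime_ms_ts)) (DAY(?uptime_ms_ts)) (HOURS(?uptime_ms_ts))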
Lines 1-2 indicate the prefixes of the two on-
tologies being exploited mapping to the first query
generation step. Lines 3-4 map to the second step,
which completely specifies the query SELECT & FROM
clauses.
Lines 5-9 signify a set of triple patterns, generated
by the third step in query creation, linking the uptime
measurement to: (a) the Uptime metric; (b) its actual
value used in the Line 3 formula; (c) the actual date-
Time when this measurement was produced; (d) the
URI of the object measured (a workflow instance in
our case). While these lines guarantee that we operate
over Uptime measurements, they do not provide suit-
able connections to other major information aspects,
such as which BPaaS is actually concerned.
As such, Lines 10-13, mapping to the fourth query
creation step, realise the needed connections from the
object measured to both the BPaaS instance involv-
ing it and the BPaaS under investigation. Line 10
connects the current BPaaS to one of its instances,
while Line 11 links this BPaaS instance to a deployed
workflow. Line 12, currently commented, would link
this instance to a specific client that has purchased it,
in case we deal with a customer-specific metric. Fi-
nally, Line 13 maps the deployed workflow to the ac-
tual workflow instance measured. Lines 11-13 can be
differentiated based on the kind of object being mea-
sured. For instance, if a task instance is measured,
then we need to add another triple pattern indicating
that the workflow instance includes this task instance.
Line 15, currently commented, maps to the fifth
query creation step and provides a SPARQL FILTER
constraint that can restrain the history period under
investigation. In particular, the dateTime of the mea-
surement is mapped to two simple constraints indi-
cating that this dateTime should be greater or equal
to the low bound dateTime of the considered period
and less than or equal to the upper bound dateTime of
this period. Commenting out this line signifies
that the whole evaluation history of the KPI metric is
explored.
Finally, Line 17, generated by the last query cre-
ation step, provides a grouping statement where the
last sub-group maps to the evaluation period of the
KPI metric (per hour). This statement groups first the
results according to the month, then according to the
day and then according to the respective hour.
The derivation of KPI drill-down knowledge is
handled by another algorithm which maps to the
drillDown method in Listing 1. As can be seen,
this algorithm exploits the transformation one by also
considering the whole derivation tree of the current
KPI metric at hand. The drill-down algorithm steps
are the following: (a) start with the top metric's
derivation list and expand it recursively until leaf
metric nodes are reached; this leads to the produc-
tion of a metric (derivation) tree whose nodes are
either formula or metric nodes, where the latter map
only to metrics for which measurements exist in the
Semantic KB; (b) compute the needed metric values
according to the SPARQL-based transformation ap-
proach in a bottom-up way. The latter approach is
used only for the leaf metric nodes in the metric
derivation tree. Then, the produced measurements are
propagated up the tree by considering the respective
formula and metric nodes visited in a level-by-level
manner. Each time a KPI metric node is visited, its
values are produced according to the metric formula
involved and the already produced measurements,
and they are stored in the hashtable, from metrics to
measurement result sets, to be returned.
6 CONCLUSIONS
This paper has presented a semantic approach to KPI
measurement which enables the clever and dynamic
exploration of the KPI metric space. This approach
relies on the appropriate definition of the KPI met-
ric, the expansion of its formula and its transforma-
tion into a SPARQL query that is then issued over a
semantic KB. A KPI drill-down capability is also of-
fered by the proposed KPI analysis system, which cap-
italises on the KPI measurement algorithm and the
KPI's hierarchy tree. This paper has also proposed
specific ontologies focusing on the complete KPI def-
inition and the capturing of BPaaS dependency mod-
els. Both ontologies can be exploited to semantically
link information originating from the BPaaS execu-
tion so as to populate the semantic KB and enable
various types of BPaaS analysis over it, apart from
the currently offered one, such as process mining or
best BPaaS deployment discovery.
The following research directions are planned.
First, further validating the two proposed ontologies
against use cases of the CloudSocket project to obtain
suitable feedback for optimising them. Second, eval-
uating the KPI analysis system based on both perfor-
mance and accuracy aspects. Third, realising addi-
tional BPaaS analysis algorithms into the respective
KPI analysis system to transform it into a full-fledged
BPaaS evaluation environment.
ACKNOWLEDGEMENTS
This research has received funding from the European
Community’s Framework Programme for Research
and Innovation HORIZON 2020 (ICT-07-2014) un-
der grant agreement number 644690 (CloudSocket).
REFERENCES
Kaplan, R. S. and Norton, D. P. (1992). The Balanced
Scorecard Measures that Drive Performance. Har-
vard Business Review, 70(1):281–308.
Castellanos, M., Casati, F., Shan, M.-C., and Dayal, U.
(2005). iBOM: A Platform for Intelligent Business
Operation Management. In ICDE, pages 1084–1095,
Washington, DC, USA. IEEE Computer Society.
Chowdhary, P., Bhaskaran, K., Caswell, N. S., Chang, H.,
Chao, T., Chen, S.-K., Dikun, M., Lei, H., Jeng, J.-
J., Kapoor, S., Lang, C. A., Mihaila, G., Stanoi, I.,
and Zeng, L. (2006). Model Driven Development
for Business Performance Management. IBM Syst. J.,
45(3):587–605.
Costello, C. and Malloy, O. (2008). Building a Process Per-
formance Model for Business Activity Monitoring. In
Wojtkowski, W., Wojtkowski, G., Lang, M., Conboy,
K., and Barry, C., editors, Information Systems Devel-
opment - Challenges in Practice, Theory, and Educa-
tion, pages 237–248. Springer-Verlag.
Cui, Y. and Nahrstedt, K. (2001). QoS-Aware Depen-
dency Management for Component-Based Systems.
In HPDC, page 127. IEEE Computer Society.
De Medeiros, A. K. A., Pedrinaci, C., van der Aalst, W.
M. P., Domingue, J., Song, M., Rozinat, A., Norton,
B., and Cabral, L. (2007). An Outlook on Semantic
Business Process Mining and Monitoring. In OTM,
pages 1244–1255. Springer-Verlag.
del Río-Ortega, A., Resinas, M., Durán, A., and Ruiz-
Cortés, A. (2016). Using templates and linguistic pat-
terns to define process performance indicators. En-
terp. Inf. Syst., 10(2):159–192.
Diamantini, C., Potena, D., Storti, E., and Zhang, H. (2014).
An Ontology-Based Data Exploration Tool for Key
Performance Indicators. In ODBASE, pages 727–744,
Amantea,Italy. Springer-Verlag.
Frank, U., Heise, D., Kattenstroth, H., and Schauer, H.
(2008). Designing and utilising business indicator sys-
tems within enterprise models: Outline of a method.
In MobIS: Modellierung zwischen SOA und Compli-
ance Management, Saarbrücken, Germany.
Friedenstab, J.-P., Janiesch, C., Matzner, M., and Muller, O.
(2012). Extending BPMN for Business Activity Mon-
itoring. In HICSS, pages 4158–4167. IEEE Computer
Society.
González, O., Casallas, R., and Deridder, D. (2009). MMC-
BPM: A domain-specific language for business pro-
BPM: A domain-specific language for business pro-
cesses analysis. In BIS, volume 21, pages 157–168,
Poznan, Poland. Springer.
Gruschke, B. (1998). Integrated Event Management: Event
Correlation Using Dependency Graphs. In DSOM.
Hasselmeyer, P. (2001). Managing Dynamic Service
Dependencies. In DSOM, pages 141–150, Nancy,
France. INRIA.
Kritikos, K., Magoutis, K., and Plexousakis, D. (2016). To-
wards Knowledge-Based Assisted IaaS Selection. In
CloudCom, Luxembourg. IEEE Computer Society.
Kritikos, K., Pernici, B., Plebani, P., Cappiello, C., Co-
muzzi, M., Benbernou, S., Brandic, I., Kertész, A.,
Parkin, M., and Carro, M. (2013). A survey on ser-
vice quality description. ACM Comput. Surv., 46(1):1.
Kritikos, K. and Plexousakis, D. (2006). Semantic QoS
Metric Matching. In ECOWS, pages 265–274. IEEE
Computer Society.
Liu, R., Nigam, A., Jeng, J., Shieh, C., and Wu, F. Y. (2010).
Integrated Modeling of Performance Monitoring with
Business Artifacts. In ICEBE, pages 64–71, Shanghai,
China. IEEE Computer Society.
Motta, G., Pignatelli, G., and Florio, M. (2007). Perform-
ing Business Process Knowledge Base. In First Inter-
national Workshop and Summer School on Service Sci-
ence, Heraklion, Greece.
Pierantonio, A., Rosa, G., Silingas, D., Thönssen, B., and
Woitsch, R. (2015). Metamodeling Architectures for
Business Processes in Organizations. In Proceedings
of the Projects Showcase at STAF, L’Aquila, Italy.
CEUR.
Rossini, A., Kritikos, K., Nikolov, N., Domaschka, J.,
Griesinger, F., Seybold, D., and Romero, D. (2015).
D2.1.3 CloudML Implementation Documentation
(Final version). Paasage project deliverable.
Seedorf, S. and Schader, M. (2011). Towards an Enterprise
Software Component Ontology. In AMCIS. Associa-
tion for Information Systems.
Wetzstein, B., Karastoyanova, D., and Leymann, F. (2008a).
Towards Management of SLA-Aware Business Pro-
cesses Based on Key Performance Indicators. In BP-
MDS, Montpellier, France.
Wetzstein, B., Leitner, P., Rosenberg, F., Brandic, I., Dust-
dar, S., and Leymann, F. (2009). Monitoring and Ana-
lyzing Influential Factors of Business Process Perfor-
mance. In EDOC, pages 118–127. IEEE Press.
Wetzstein, B., Ma, Z., and Leymann, F. (2008b). Towards
Measuring Key Performance Indicators of Semantic
Business Processes. In BIS, pages 227–238. Springer-
Verlag.
Woitsch, R., Albayrak, M., Köhn, H., Utz, W., Ferrer, A. J.,
Iranzo, J., Leonforte, A., Gallo, A., Mihnea, V., Pacu-
rar, R., Avasilcai, C., Arama, G., Boca, R., Griesinger,
F., Seybold, D., Domaschka, J., Kritikos, K., and Plex-
ousakis, D. (2015). D4.1 First CloudSocket Archi-
tecture. CloudSocket European Project.