them, a synthesis of two frameworks was constructed:
(a) the framework in (Kritikos et al., 2013), which
was defined according to the service lifecycle; and
(b) the one in (Uriarte et al., 2014), which defines a
complementary set of criteria and focuses on description,
model validation capabilities and tool support. The
latter is discussed, while the details of the former can
be found in (Kritikos et al., 2013).
In the sequel, we describe the evaluation criteria
of our comparison, summarise the results in Table 1
and discuss these results. We analyse the following
SLA languages: WSLA (Keller and Ludwig, 2003),
WS-Agreement (Andrieux et al., 2007), WSOL (Tosic
et al., 2003), RBSLA (Paschke, 2005), Linked USDL
(LUA for short) (Pedrinaci et al., 2014), SLALOM
(Correia et al., 2011) and SLAC (Uriarte et al., 2014).
Description refers to (a) the formalism used for SLA
description; (b) the coverage of both functional and
non-functional aspects; (c) the re-usability of
SLA constructs across different SLAs; (d)
the ability to express composite SLAs; (e) the coverage
of the cloud domain (wrt. cloud service types); (f)
price model coverage (schemes & computation model);
(g) dynamicity (i.e., the capability to move between
service levels (SLs) or modify SLOs based on certain
conditions or per request); (h) (model) validation
capabilities (i.e., the capability to perform syntactic, semantic
and constraint validation of SLAs); and (i) editor
support. For cloud coverage, the evaluation considers
whether the SLA language covers all cloud service
types and if it is generic enough (assessed as ‘a’, denoting
this ability), and whether it can cover all or some
service types by providing respective cloud domain
terms (‘y’ means all service types, ‘p’ means some).
Non-coverage is denoted by ‘n’.
The price model criterion indicates whether a language: ‘n’: does
not support price models; ‘p’: covers only pricing
schemes; ‘y’: also covers the price computation model.
Dynamicity denotes: (a) ‘n’: the language does not
cover this aspect; (b) ‘SLO’: it covers it at the SLO
level; (c) ‘SL’: it covers it at the SL level, enabling
transitions from one SL to another or modification of an SL.
A language’s validation capabilities
can map to multiple values: (a) ‘n’: no validation
capabilities are offered; (b) ‘sy’: syntactic validation
is enabled; (c) ‘se’: semantic validation is enabled; (d)
‘c’: constraint-based validation is enabled.
For editor support, a language can provide: ‘s’: a domain-specific
editor; ‘g’: a generic one; ‘n’: no editor. The Discovery
criterion includes: (a) metric definition, i.e., the ability
to refer to and also define quality metrics; (b) alternatives,
i.e., the specification of alternative SLs; (c) soft constraints,
i.e., the use of soft constraints to address over-constrained
requirements; (d) matchmaking metric, i.e., support for
metrics that explicate how specifications are matched.
Negotiation. Meta-negotiation refers to the supply
of information to support negotiation establishment;
and negotiability to the ability to indicate in which way
quality terms are negotiable. For Monitoring, an SLA
language should define: (a) the metric provider responsible
for the monitoring; and (b) the metric schedule
indicating how often the SLO metrics are measured.
Assessment defines: (a) the condition evaluator,
i.e., the party responsible for SLO assessment; (b)
qualifying conditions for SLOs; (c) the party obliged
to deliver an SLO; (d) the SLO assessment schedule;
(e) the validity period in which an SLO is guaranteed;
(f) recovery actions to remedy SLO violations.
Settlement enables the definition, with respect to
particular situations, of: (a) penalties; (b) rewards;
(c) settlement actions. Archive, instead, concerns
the ability to specify the SLA’s validity period.
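To make the monitoring, assessment and settlement constructs above concrete, the following sketch models them as plain data structures. All class and field names here are hypothetical illustrations of what an SLA language should be able to express; they are not taken from any of the surveyed languages:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration of the constructs an SLA language should
# capture for monitoring, assessment and settlement; all names are ours.

@dataclass
class MetricSpec:
    name: str              # e.g. "availability"
    provider: str          # metric provider responsible for monitoring
    schedule_seconds: int  # metric schedule: how often it is measured

@dataclass
class SLO:
    metric: MetricSpec
    condition: str                    # e.g. "availability >= 0.99"
    qualifying_condition: str         # precondition under which the SLO applies
    obliged_party: str                # party obliged to deliver the SLO
    condition_evaluator: str          # party responsible for SLO assessment
    assessment_schedule_seconds: int  # how often the SLO is assessed
    validity_period: str              # period in which the SLO is guaranteed
    recovery_actions: List[str] = field(default_factory=list)

@dataclass
class Settlement:
    penalties: List[str] = field(default_factory=list)
    rewards: List[str] = field(default_factory=list)
    settlement_actions: List[str] = field(default_factory=list)

# A minimal example SLO for a fictitious compute service:
uptime = SLO(
    metric=MetricSpec("availability", provider="third-party-monitor",
                      schedule_seconds=60),
    condition="availability >= 0.99",
    qualifying_condition="load <= 1000 req/s",
    obliged_party="provider",
    condition_evaluator="third-party-monitor",
    assessment_schedule_seconds=3600,
    validity_period="monthly",
    recovery_actions=["restart service", "notify consumer"],
)
```

The separation between the metric provider and the condition evaluator mirrors the distinction the criteria draw between who measures and who assesses.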
Management. Framework assesses whether an
open- or closed-source management framework has
been built on top of an SLA language. The respective
assessment values are: (a) ‘o’: an open-source framework
exists; (b) ‘y’: a framework exists but is not open-source;
(c) ‘n’: no framework is available.
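As an illustration of how the coded cell values above combine, the snippet below encodes one evaluation row as a dictionary. The cell values are placeholders chosen only to show the coding schemes; the real per-language results are those in Table 1:

```python
# Hypothetical encoding of one evaluation row; the values follow the
# coding schemes defined in the text, not the actual Table 1 results.
row = {
    "cloud_coverage": "a",      # generic enough to cover all service types
    "price_model": "p",         # pricing schemes only, no computation model
    "dynamicity": "SL",         # supports transitions between service levels
    "validation": {"sy", "c"},  # multi-valued: syntactic + constraint-based
    "editor": "s",              # domain-specific editor
    "framework": "o",           # open-source management framework exists
}

def supports_constraint_validation(r):
    """Validation is the one multi-valued criterion, hence a set."""
    return "c" in r["validation"]

print(supports_constraint_validation(row))  # prints True for this row
```

Modelling validation as a set reflects that, unlike the other criteria, a language may offer several validation capabilities at once.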
Based on the evaluation results in Table 1, no SLA
language scores well on all criteria across all life-cycle
activities. Considering all criteria, we could nominate
WSLA and LUA as the most prominent languages,
but they still need to be substantially improved.
Concerning the description activity, SLAC (Uriarte
et al., 2014) is the best, since it exhibits good
composability and dynamicity levels. Pricing models
are also partially covered, a domain-specific editor
is offered, and it is the sole language supporting
constraint-based SLA model validation.
As far as matchmaking and negotiation activities
are concerned, WS-A (Andrieux et al., 2007) seems to
be slightly better than the rest, especially with respect
to the second activity. However, matchmaking is not
actually well covered by any language. For monitoring
and assessment, LUA and WSLA seem to be the best
languages, with WSLA being slightly better on the
assessment part and the recovery action coverage. More
than half of the languages provide an SLA management
framework. Most of these are open-source, enabling
possible adopters to extend them according to their needs.
Based on the above analysis, no SLA language prevails,
so there is a need to either introduce yet another
SLA language or improve an existing one. In this
paper, we take the second direction and combine the
capabilities of the OWL-Q (Kritikos and Plexousakis,
2006) and SLAC (Uriarte et al., 2014) languages to
offer a language agglomeration that advances the state-
of-the-art. The last column in Table 1 depicts the