the modelling tool can be integrated with the
tools that produce the transformations.
This paper is organized as follows: Section 2
introduces the main concepts used in the article and
outlines the Lottery SPL; this SPL is used as the
running example. Section 3 outlines the entire
model-driven testing framework. Section 4 describes
the activities for the framework in MDE
development. Section 5 describes the activities in
SPL development. Section 6 summarizes related
work. Finally, Section 7 draws some conclusions
and presents future lines of work.
2 BACKGROUND
Model-Driven Engineering (MDE) considers
models as first-class citizens for software
development, maintenance and evolution through
model transformation (Mens and Van Gorp 2006). In
addition to independence between models, Model-
Driven Architecture (MDA, (OMG 2003)) clearly
separates business complexity from implementation
details by defining several software models at
different abstraction levels. MDA defines three
viewpoints of a system: (i) the Computation
Independent Model (CIM), which focuses on the
context and requirements of the system without
considering its structure or processing, (ii) the
Platform Independent Model (PIM), which focuses
on the operational capabilities of a system outside
the context of a specific platform, and (iii) the
Platform Specific Model (PSM), which includes
details relating to the system for a specific platform.
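As an illustration (ours, not from the paper), the PIM-to-PSM step can be thought of as rewriting abstract, platform-independent types into platform-specific ones. The toy model, the Java platform mapping and all names below are hypothetical:

```python
# Illustrative sketch: a trivial PIM-to-PSM model transformation.
# The PIM, the type mapping and the "Lottery" class are hypothetical.

PIM = {  # platform-independent class model
    "Lottery": {"attributes": {"ticketPrice": "Money"}},
}

# Platform-specific type mapping (here: an assumed Java platform)
JAVA_TYPES = {"Money": "java.math.BigDecimal"}

def pim_to_psm(pim, type_map):
    """Produce a PSM by mapping abstract types to platform types."""
    psm = {}
    for cls, spec in pim.items():
        attrs = {name: type_map.get(t, t)
                 for name, t in spec["attributes"].items()}
        psm[cls] = {"attributes": attrs}
    return psm

psm = pim_to_psm(PIM, JAVA_TYPES)
print(psm["Lottery"]["attributes"]["ticketPrice"])  # java.math.BigDecimal
```

Real MDA transformations operate on full metamodels rather than dictionaries, but the principle is the same: the PIM carries no platform detail, and the transformation injects it.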
The UML 2.0 Testing Profile (UML-TP) defines
a language for designing, visualizing, specifying,
analyzing, constructing and documenting the
artifacts of test systems. It extends UML 2.0 with
test specific concepts for testing, grouping them into
test architecture, test data, test behaviour and test
time. As a profile, UML-TP seamlessly integrates
into UML. It is based on the UML 2.0 specification
and is defined using the metamodeling approach of
UML (OMG 2005). The test architecture in UML-TP
is the set of concepts to specify the structural aspects
of a test situation (Baker, Dai et al. 2007). It includes
TestContext, which contains the test cases (as
operations) and whose composite structure defines
the test configuration. The test behaviour specifies
the actions and evaluations necessary to evaluate the
test objective, which describes what should be
tested. The test case behaviour is described using the
Behavior concept and can be shown using UML
interaction diagrams, state machines and activity
diagrams. The TestCase specifies one case to test the
system, including what to test it with, the required
input, result and initial conditions. It is a complete
technical specification of how a set of
TestComponents interacts with an SUT to realize a
TestObjective and return a Verdict value (OMG
2005). This work focuses on test cases, whose
behaviour is represented by UML sequence diagrams.
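To make the UML-TP vocabulary concrete, the following sketch (ours, not part of UML-TP itself) renders the core test architecture concepts as plain Python classes; the lottery SUT and its operation are hypothetical:

```python
# Illustrative sketch of UML-TP concepts: TestContext, TestCase,
# TestComponent, SUT and Verdict. The SUT behaviour is invented.
from enum import Enum

class Verdict(Enum):  # UML-TP arbitration result of a test case
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"

class LotterySUT:
    """Hypothetical system under test."""
    TICKET_PRICE = 5
    def buy_ticket(self, amount):
        return amount >= self.TICKET_PRICE

class TestComponent:
    """Interacts with the SUT on behalf of a test case."""
    def stimulate(self, sut, amount):
        return sut.buy_ticket(amount)

class TestContext:
    """Contains test cases as operations; its composite structure
    (here one component plus one SUT) is the test configuration."""
    def __init__(self):
        self.component, self.sut = TestComponent(), LotterySUT()

    def test_buy_ticket(self):  # a TestCase realizing a TestObjective
        ok = self.component.stimulate(self.sut, 5)
        return Verdict.PASS if ok else Verdict.FAIL

print(TestContext().test_buy_ticket())  # Verdict.PASS
```

In UML-TP proper these roles are stereotypes applied to UML classifiers, and the test case behaviour would be drawn as a sequence diagram rather than written as method bodies.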
Software Product Lines (SPL) are suitable for
development with Model-Driven principles: an SPL
is a set of software-intensive systems sharing a
common, managed set of features which satisfy the
specific needs of a particular market segment or
mission and which are developed from a common
set of core assets in a prescribed way (Clements and
Northrop 2001). Therefore, products in a line share a
set of characteristics (commonalities) and differ in a
number of variation points, which represent the
variabilities of the products. Software construction
in SPL contexts involves two levels: (1) Domain
Engineering, which covers the development of the
common features and the identification of the
variation points; (2) Product Engineering, in which
each concrete product is built, which leads to the
inclusion of the commonalities in the products and
the corresponding adaptation of the variation points.
Thus, the preservation of traceability among
software artifacts is an essential task, both from
Domain to Product Engineering and among the
different abstraction levels of each.
The way in which variability is managed in SPL
is critical in SPL development. In this work, the
proposal by Pohl et al. (Pohl, Böckle et al. 2005) is
used to manage the variability, defined in their
Orthogonal Variability Model (OVM). In OVM,
variability information is saved in a separate model
containing data about variation points and variants (a
variation point may, for example, be bound to
different variants in different products). OVM allows the
representation of dependencies between variation
points and variable elements, as well as associations
of variation points and variants with other
software development models (i.e., design artifacts,
components, etc.). Associations between variants
may be requires_V_V or excludes_V_V, depending
on whether one variant requires or excludes
another variant. In the same way,
associations between a variant and a variation
point may be requires_V_VP or excludes_V_VP,
denoting that a variant requires or
excludes the corresponding variation point.
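The requires/excludes constraints above can be sketched as a simple configuration check (ours, not from the paper; the variation points, variants and constraints are hypothetical):

```python
# Illustrative sketch: validating a product configuration against
# OVM-style requires_V_V / excludes_V_V constraints between variants.
# All variation points, variants and constraint pairs are invented.

variation_points = {"Payment": {"cash", "credit_card"},
                    "Notification": {"email", "sms"}}

requires_V_V = {("credit_card", "email")}  # variant requires variant
excludes_V_V = {("cash", "sms")}           # variant excludes variant

def valid_configuration(chosen):
    """Return True if the chosen variants satisfy all constraints."""
    for a, b in requires_V_V:
        if a in chosen and b not in chosen:
            return False
    for a, b in excludes_V_V:
        if a in chosen and b in chosen:
            return False
    # every chosen variant must belong to some variation point
    all_variants = set().union(*variation_points.values())
    return chosen <= all_variants

print(valid_configuration({"credit_card", "email"}))  # True
print(valid_configuration({"cash", "sms"}))           # False
```

A full OVM model would also carry the requires_V_VP/excludes_V_VP associations to variation points and the traceability links to design artifacts; this sketch covers only the variant-to-variant case.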
The variants may be related to artifacts of an
arbitrary granularity. Since variants may be related