the previous section as the underlying infrastructure supporting the same CAS. For this purpose, we use an experimental evaluation that emulates the actual operation of the CAS on a smaller and more manageable sample of the population, under the assumptions discussed in the following.
4.1 Description of the Experiments
We designed two experiments for evaluation purposes. In both experiments we assess the impact of utility on decision making by comparing the decisions taken for a fixed set of users attempting a trip from a randomly chosen point on the map of Trento to another, also randomly generated, destination in the city. Users have profiles with different preferences, allowing them to prioritize the available transportation modes (public transportation or car) and routes in different ways. The experiments are performed for a set of one thousand users distributed across three profile types: workers, students, and pensioners. All trip requests take place within a fixed interval during the morning of a working day. Experiment A measures which transportation mode is selected by each user when a) the utility model of Section 3 (and in particular, Equation 3) is used to decide the best option for each user, b) only the duration of the trip is taken into consideration (shorter is better), and c) only the cost of the trip is used (cheaper is better). Experiment B measures the effect of the bus fare price on the users' choice between public and private transportation when utility is used for decision making. For this purpose we reduce the bus fare in fixed decrements and, as before, measure the transportation mode selected by each user. The setup for both experiments is described in the following; a minimal sketch of the decision policies underlying both experiments is given below for illustration.
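The sketch below is only illustrative: all class, field, and weight names are assumptions, and the utility function shown is a simple weighted placeholder rather than Equation 3 of Section 3. It captures, however, the three ranking criteria of Experiment A and the fare decrement loop of Experiment B.

import java.util.Comparator;
import java.util.List;

// All names and the simplified utility formula below are assumptions for illustration;
// the actual system uses the utility model of Section 3 (Equation 3).
class DecisionPoliciesSketch {

    record RouteAlternative(String mode, double durationMin, double cost) {}

    // Placeholder utility: a weighted trade-off between trip duration and cost,
    // with weights taken from the user's profile (higher utility is better).
    static double utility(RouteAlternative r, double timeWeight, double costWeight) {
        return -(timeWeight * r.durationMin() + costWeight * r.cost());
    }

    // Experiment A, policy (a): pick the alternative with the highest utility.
    static RouteAlternative byUtility(List<RouteAlternative> alts, double tw, double cw) {
        return alts.stream()
                   .max(Comparator.comparingDouble(r -> utility(r, tw, cw)))
                   .orElseThrow();
    }

    // Experiment A, policy (b): shortest trip wins.
    static RouteAlternative byDuration(List<RouteAlternative> alts) {
        return alts.stream()
                   .min(Comparator.comparingDouble(RouteAlternative::durationMin))
                   .orElseThrow();
    }

    // Experiment A, policy (c): cheapest trip wins.
    static RouteAlternative byCost(List<RouteAlternative> alts) {
        return alts.stream()
                   .min(Comparator.comparingDouble(RouteAlternative::cost))
                   .orElseThrow();
    }

    // Experiment B: lower the bus fare in fixed decrements and record the mode
    // chosen under the utility policy at each fare level.
    static void fareSweep(List<RouteAlternative> alts, double tw, double cw,
                          double fareDecrement, int steps) {
        for (int i = 0; i <= steps; i++) {
            double discount = i * fareDecrement;
            List<RouteAlternative> adjusted = alts.stream()
                .map(r -> r.mode().equals("bus")
                        ? new RouteAlternative(r.mode(), r.durationMin(),
                                               Math.max(0, r.cost() - discount))
                        : r)
                .toList();
            System.out.println("fare -" + discount + " => "
                               + byUtility(adjusted, tw, cw).mode());
        }
    }
}

In the actual system the ranking is performed by the Utility Module and the decision-making cell described in Section 4.2; the three configurations of Experiment A differ only in the ranking criterion applied.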
4.2 Experimental Setup
In terms of infrastructure for our experimental evaluation, we extend the system architecture proposed in (Andrikopoulos et al., 2014b) and add the components necessary for generating a representative load and driving the system with it. The resulting system is summarized in Fig. 1. More specifically, the system discussed in (Andrikopoulos et al., 2014b) distinguishes between a Modeling Environment and a Runtime Environment. The Modeling Environment, implemented as an Eclipse Graphical Editor (based on the Eclipse Graphical Editing Framework, https://eclipse.org/gef/), allows the definition of cells and ensembles (as discussed in Section 2) as a set of service orchestrations and choreographies, respectively; WS-BPEL is used for the former and the BPEL4Chor language for the latter, as discussed in (Andrikopoulos et al., 2014b). For the purposes of our evaluation, we used the Modeling Environment to design the ensemble and the respective cells that allow a passenger to query the UMS for traveling options between two points in a city. The UMS replies with a set of route alternatives covering both public (i.e., buses and possibly walking) and private (i.e., car driving) transportation modes. Decision making based on the different policies (i.e., utility, trip duration, or cost) is also modeled as a cell in the system, allowing the automation of the experiments.
The execution of the cells takes place in the Execution Engine component of the Runtime Environment, implemented using the Apache ODE engine (http://ode.apache.org/), an open source implementation of BPEL. The Utility Module in Fig. 1 implements the utility model discussed in Section 3 as a set of Web Services that interact with the Execution Engine through an Enterprise Service Bus (ESB). More specifically, these services accept a list of route alternatives and a unique entity identifier, representing a user of the UMS, and return the route alternatives ordered by their calculated utility for that specific entity. The entity identifiers, together with profiles containing each entity's preferences with respect to maximum traveling time, cost, etc., are stored in the Entity Management System component, also implemented as a set of Web Services on top of a database for persistence purposes. The Adaptation Manager and Monitoring components in Fig. 1 are outside the scope of this evaluation and are therefore omitted from the rest of the discussion; they are shown here only for completeness.
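For illustration, a possible shape of this service contract, assuming a JAX-WS style interface and reusing the RouteAlternative record of the earlier sketch (interface and operation names are illustrative and do not reflect the actual service signatures), is the following.

import java.util.List;
import javax.jws.WebMethod;
import javax.jws.WebService;

// Assumed contract for the Utility Module: given the identifier of an entity and the
// route alternatives returned by the UMS, return the same alternatives ordered by
// decreasing utility for that entity. RouteAlternative is the record defined in the
// sketch of Section 4.1.
@WebService
public interface UtilityModuleService {

    @WebMethod
    List<RouteAlternative> orderByUtility(String entityId,
                                          List<RouteAlternative> alternatives);
}

In the deployed system these operations are not invoked directly but are reached through the ESB mentioned above.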
The main difference between the system discussed in (Andrikopoulos et al., 2014b) and the one presented in Fig. 1 is the addition of a third aspect, that of Load Generation & Driver. This comprises the Entity Generator, responsible for generating entities and their profiles for experimental use; the Route Generator, which produces the route alternatives for the possible trips; and the Load Driver, which generates a load for the system by emulating the behavior of the entities generated by the Entity Generator using the routes produced by the Route Generator, as sketched below.
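The following is a minimal sketch of how these three components could be combined, assuming a simple sequential driver; all names, coordinates, counts, and the random seed are illustrative and not taken from the actual implementation.

import java.util.List;
import java.util.Random;
import java.util.stream.IntStream;

// Illustrative sketch only; the actual Load Generation & Driver components are
// implemented as part of the system of Fig. 1.
class LoadDriverSketch {

    record Entity(String id, String profileType) {}            // worker, student, or pensioner
    record TripRequest(String entityId, String origin, String destination) {}

    static final Random RND = new Random(42);

    // Entity Generator: entities spread over the three profile types of Section 4.1.
    static List<Entity> generateEntities(int count) {
        String[] profiles = {"worker", "student", "pensioner"};
        return IntStream.range(0, count)
                .mapToObj(i -> new Entity("entity-" + i, profiles[i % profiles.length]))
                .toList();
    }

    // Route Generator (simplified): a random origin/destination pair within a box around Trento.
    static TripRequest generateTrip(Entity e) {
        return new TripRequest(e.id(), randomPoint(), randomPoint());
    }

    static String randomPoint() {
        double lat = 46.02 + RND.nextDouble() * 0.08;   // placeholder coordinates
        double lon = 11.08 + RND.nextDouble() * 0.08;
        return lat + "," + lon;
    }

    public static void main(String[] args) {
        // Load Driver: one trip request per entity.
        for (Entity e : generateEntities(1000)) {
            TripRequest trip = generateTrip(e);
            System.out.println("submitting " + trip);   // stand-in for the call to the Execution Engine
        }
    }
}

In the deployed system the driver would submit these trip requests to the Execution Engine within the fixed morning interval of the experiments, rather than print them.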
In the following we discuss how we implemented these latter components, as well as the steps necessary for preparing and executing our experiments. With respect to the deployment of the infrastructure depicted in Fig. 1, the components are distributed across on-premise, private cloud facilities of both institutions. The Modeling and Load Generation & Driver components, as well as the Execution Engine, are deployed on separate machines at the University of