5 EVALUATION OF RESULTS
In this section, we analyse the results achieved by all
the MHs for two different problem instances
(medium and high difficulty, respectively). These
two instances are real data taken from our MSCC’s
production environment on two different days at the
same time (from 12:40 to 12:45, 300 seconds): a
one-day campaign and a normal day. The size of the
time-frame to execute all the MHs is 300 seconds (5
minutes) because we need to provide the system
with a solution every 300 seconds (continuous re-
planning of the ACD). We have selected this time
interval because this period (between 12:30 and
13:00) is very representative, as it is precisely the
most critical time of the day (highest load of the
day: n/m). Note that around 800 incoming calls (n)
arrive simultaneously in such a time interval on a
normal day, whereas up to 2450 simultaneous
incoming calls may arrive during this interval under
a commercial campaign. The number of agents (m),
for each time interval, oscillates between 700 and
2100, having 16 different skills for each agent on
average (minimum=1 and maximum=108), grouped
in profiles of 7 skills on average. The total number
of CGs considered for this study is 167. Therefore,
when the workload (n/m) is really high, finding the
right assignment between incoming calls and agents
becomes crucial. Accordingly, we ran every MH on
two dual-core processors of a Sun Fire E4900 server
(one processor for the interfaces and data
pre-processing, and the other one for each MH).
Once the magnitude of our MSCC has been
presented, each MH is compared with the
others. Table 1 summarises the results obtained by
each MH in 50 executions, starting from 50 different
randomly generated initial solutions.
In our comparative study, we present dissimilar
MHs covering diverse strategies. Theoretically, due
to the local character of the basic LS, it is difficult
to reach a high-quality solution because the
algorithm usually gets trapped in a neighbourhood
once a local minimum is found.
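This trapping behaviour can be illustrated with a minimal LS sketch; the `fitness` and `neighbours` functions below are hypothetical placeholders, not our actual MSCC objective and move operator:

```python
def local_search(initial, fitness, neighbours, max_iters=10_000):
    """Basic LS: move to a better neighbour while one exists.

    Stops as soon as no neighbour improves the current solution,
    i.e. when a local minimum has been reached.
    """
    current = initial
    for _ in range(max_iters):
        improved = False
        for candidate in neighbours(current):
            if fitness(candidate) < fitness(current):  # minimisation
                current, improved = candidate, True
                break
        if not improved:  # trapped: no better solution in the neighbourhood
            return current
    return current

# Toy usage: minimise (x - 3)^2 over the integers with neighbours x - 1, x + 1.
print(local_search(0, lambda x: (x - 3) ** 2, lambda x: [x - 1, x + 1]))  # 3
```

Because such an engine only accepts improvements, it halts at the first point whose neighbours are all worse, however far that point is from the global optimum.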
This occurs because the engine only looks for better
solutions, which probably do not exist in that
neighbourhood. For this reason, it is sometimes
more appropriate to allow deteriorating moves in
order to escape to other regions of the search space.
This is precisely the shrewd policy of SA, whose
temperature allows for many oscillations at the
beginning of the process (the probability of
accepting a worse solution decreases over time) and
only a few at the end (fewer chances to select a
worse solution, as the algorithm is supposed to be
refining the solution at this point).
Specifically, we have chosen Cauchy’s criterion
because its convergence is faster than Boltzmann’s
and we only have 300 seconds to run the complete
process. Besides, this scheme avoids decreasing the
distance between two solutions when the process
converges (jumps in the neighbourhood). Therefore,
the temperature must be high enough at the
beginning to explore the search space (its
neighbourhood) well, and low enough at the end to
intensify the search (exploitation of promising
areas). The value chosen for the cooling speed is,
therefore, the stopping condition, and it must agree
with the number of neighbours generated.
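A minimal sketch of SA with the Cauchy cooling criterion, assuming a generic minimisation problem, could look as follows; the `fitness` and `neighbour` arguments are again placeholders for the real MSCC objective and move operator:

```python
import math
import random

def simulated_annealing(initial, fitness, neighbour, t0=100.0,
                        max_iters=5000, seed=1):
    """SA with the Cauchy cooling criterion T_k = T0 / (1 + k)."""
    rng = random.Random(seed)
    current = best = initial
    for k in range(max_iters):
        t = t0 / (1 + k)  # Cauchy criterion: cools faster than Boltzmann's
        candidate = neighbour(current, rng)
        delta = fitness(candidate) - fitness(current)
        # Always accept improvements; accept deteriorations with
        # probability e^(-delta / T), which shrinks as T decreases
        # (many oscillations at the beginning, few at the end).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
        if fitness(current) < fitness(best):
            best = current
    return best
```

With T_k = T0 / (1 + k) the temperature drops hyperbolically, so the exploratory phase is short, which matches a tight 300-second budget.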
Table 1 gathers the results obtained by each MH
in 50 different executions for two different problem
instances with the purpose of providing a fair
comparison. The first three columns are the best,
worst and mean fitness values, respectively. Then,
we have the standard deviation and the effectiveness
(the best-fitted solution represents 100%).
We perceive from Table 1 that SA behaves worse
than the other MHs, except for the easiest instance
of the problem. This may occur because time is
scarce in our environment and the power of SA
relies on a progressive cooling. If we cool the
temperature too fast, we lose the effectiveness of
accepting worse solutions in some cases. Instead, if
we cool the temperature too slowly, we may accept
worse solutions systematically without converging.
We have applied a trade-off between exploration
and exploitation, but the available time seems too
limited to apply SA in our environment (perhaps
things might change with more time).
Another option to increase the diversity of the
solutions is to enlarge the neighbourhood, as VNS
does. This philosophy consists of systematically
changing the neighbourhood used by the LS,
enlarging it when the process stagnates. In VNS, the
search is not restricted to a single neighbourhood as
in the basic LS; instead, the neighbourhood changes
as the algorithm progresses. Although we only
consider three distinct neighbourhoods, the
improvement of VNS over the basic LS is
noteworthy. Consequently, the remarkable factors
become the number of neighbourhoods and their
sizes, as well as how the algorithm reacts in
response.
Table 1 also shows that VNS only slightly
outperforms SA for the hardest instance of the
problem.
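A minimal VNS sketch with three nested neighbourhood structures can make this systematic change concrete; the `fitness` function and the move operators are placeholders rather than the real MSCC model:

```python
import random

def vns(initial, fitness, neighbourhoods, max_iters=200, seed=1):
    """VNS sketch: shake in neighbourhood N_k, run a basic LS, and
    enlarge the neighbourhood (k -> k + 1) whenever the search stagnates."""
    rng = random.Random(seed)
    best, k = initial, 0
    for _ in range(max_iters):
        current = neighbourhoods[k](best, rng)  # shake: random point in N_k(best)
        improved = True
        while improved:  # basic LS sampling moves from the smallest neighbourhood
            improved = False
            for _ in range(20):
                candidate = neighbourhoods[0](current, rng)
                if fitness(candidate) < fitness(current):
                    current, improved = candidate, True
                    break
        if fitness(current) < fitness(best):
            best, k = current, 0  # improvement: back to the smallest neighbourhood
        else:
            k = (k + 1) % len(neighbourhoods)  # stagnation: change neighbourhood
    return best
```

The key design point is the reset to k = 0 on success: larger neighbourhoods are only consulted while the smaller ones keep failing, so diversification is introduced exactly when the basic LS becomes stagnated.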
Another strategy is to start from different initial
solutions, as ILS does. ILS generates a
ICEC 2010 - International Conference on Evolutionary Computation