2.4 Uncoarsening
The uncoarsening phase is the inverse of the process followed during the coarsening phase. Having improved the assignment at level m+1, the assignment must be extended to its parent level m. The extension algorithm is simple: if a given cluster C_i belonging to an individual in the population at level m+1 is assigned the value true, then the merged pair of clusters K_l and K_m that it represents is also assigned the value true.
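A minimal sketch of this extension step is given below, assuming each coarse cluster records the indices of the (at most two) level-m clusters it was built from; the array names (coarse, left, right, fine) and the function extend_assignment are illustrative and are not taken from the paper.

#include <stdbool.h>

/* Sketch: extend (project) an individual's assignment from level m+1 to
 * its parent level m.  The arrays and merge bookkeeping are assumptions,
 * not the paper's actual data structures.
 *   coarse[i] - truth value assigned to cluster C_i at level m+1
 *   left[i]   - index at level m of the first cluster merged into C_i
 *   right[i]  - index of the second merged cluster, or -1 if C_i was
 *               left unmerged during coarsening
 *   fine      - output: truth values of the clusters at level m        */
void extend_assignment(const bool *coarse, const int *left, const int *right,
                       int n_coarse, bool *fine)
{
    for (int i = 0; i < n_coarse; i++) {
        fine[left[i]] = coarse[i];          /* K_l inherits the value of C_i */
        if (right[i] >= 0)
            fine[right[i]] = coarse[i];     /* K_m inherits the same value   */
    }
}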
2.5 Improvement
The idea of improvement is to use the population projected from level m+1 as the initial population at level m for further refinement by the memetic algorithm described in the next section. Even though the population at level m+1 is at a local minimum, the projected population may not be at a local optimum with respect to level m. Because the projected population is already a good solution and contains individuals with high fitness values, the MA converges within a few generations to a better assignment. As soon as the population tends to lose its diversity, premature convergence occurs and all individuals in the population tend to become identical, with almost the same fitness value. At each level, the memetic algorithm is assumed to have reached convergence when no further improvement of the best solution has been made during five consecutive generations.
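The sketch below illustrates this per-level refinement loop under the stated convergence criterion; the type Population and the routines run_one_generation and best_fitness are hypothetical placeholders, since the paper does not publish its implementation.

/* Sketch of the refinement loop at one level: stop when the best solution
 * has not improved for five consecutive generations (Section 2.5).
 * Population, run_one_generation() and best_fitness() stand in for the MA's
 * data structure and operators (selection, crossover, mutation, local search). */
typedef struct Population Population;
void   run_one_generation(Population *pop);
double best_fitness(const Population *pop);

#define STALL_LIMIT 5

void refine_level(Population *pop)
{
    int stalled = 0;
    double best = best_fitness(pop);        /* fitness of the best individual */

    while (stalled < STALL_LIMIT) {
        run_one_generation(pop);            /* one generation of the MA       */
        double current = best_fitness(pop);
        if (current > best) {               /* improvement: reset the counter */
            best = current;
            stalled = 0;
        } else {
            stalled++;                      /* no improvement this generation */
        }
    }
}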
3 EXPERIMENTAL RESULTS
3.1 Test Suite
The performance of the multilevel memetic algorithm
(MMA) was tested on a set of large problem instances
taken from real industrial problems. All the bench-
mark instances used in this experiment are satisfiable
instances. Due to the randomized nature of the algorithms, each problem instance was run 100 times, with a time limit of 15 minutes per run. The tests were carried out on a DELL machine with an 800 MHz CPU and 2 GB of memory. The code was written in C and compiled with the GNU C compiler version 4.6. The following parameters were fixed experimentally (a configuration sketch follows the list):
• Crossover probability = 0.85;
• Mutation probability = 0.1;
• Population size = 50;
• Stopping criteria for the coarsening phase: The
coarsening stops as soon as the size of the coarsest
problem reaches 100 variables (clusters). At this
level, MA generates an initial population.
• Convergence during the refinement phase: If there
is no observable improvement of the fitness func-
tion of the best individual during 10 consecutive
generations, MA is assumed to have reached con-
vergence and moves to a higher level.
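As a rough illustration, the sketch below gathers these fixed parameters in a single C structure; the structure and field names are illustrative and are not taken from the paper's code.

/* Sketch: the experimentally fixed parameters of Section 3.1 collected in
 * one configuration structure.  Names are illustrative only.              */
typedef struct {
    double crossover_prob;    /* crossover probability                     */
    double mutation_prob;     /* mutation probability                      */
    int    population_size;   /* number of individuals in the population   */
    int    coarsest_size;     /* coarsening stops at this many clusters    */
    int    stall_generations; /* generations without improvement before
                                 moving to the next level                  */
} MMAConfig;

static const MMAConfig default_config = {
    .crossover_prob    = 0.85,
    .mutation_prob     = 0.10,
    .population_size   = 50,
    .coarsest_size     = 100,
    .stall_generations = 10,
};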
3.2 Analysis of Results
Table 1 shows the range of all solved clauses (RAC), the mean number of solved clauses (MSC) and the range of solved clauses (RSC). As can be seen in Table 1, there is no overlap between the observed ranges for MA and MMA; hence every observed run of MMA is found to be better (closer to the solution) than the runs of MA. The domination of MMA over MA is strengthened by the fact that none of the 99% confidence intervals for the mean difference between MMA and MA contains the value 0. Finally, we can see that MMA has better asymptotic convergence (to around 0.39%–0.95% in excess of the solution) compared to MA, which only achieves around 10.05%–11.95%. We noticed that for small problems MA dominates MMA at the start of the search; however, as time increases, MMA achieves a marginally better asymptotic convergence than MA on small problems, while the difference in convergence behavior becomes more pronounced for larger problems.
4 CONCLUSIONS
A new approach for addressing the satisfiability problem, which combines the multilevel paradigm with a simple memetic algorithm, has been tested. A set of industrial benchmark instances was used in order to get a comprehensive picture of the new algorithm's performance. The multilevel memetic algorithm clearly outperformed the simple memetic algorithm on all instances. Results also show that the difference in performance between the two algorithms increases for larger problems. The experiments have shown that MMA works well with a random coarsening scheme combined with a simple memetic algorithm used as a refinement algorithm. The random coarsening provided a good global view of the problem, while the memetic algorithm used during the refinement phase provided a good local view. It can be seen from the results that the multilevel paradigm greatly improves the memetic algorithm and always returns a better solution for an equivalent runtime.