classes due to communication among threads (information about the actual search state):
1. independent search processes,
2. cooperative search processes.
If the threads of a multithreaded application (i.e., concurrently working search processes) do not exchange any information, we speak of independent search processes. However, if information accumulated during the exploration of a trajectory is sent to another search process and used by it, we speak of cooperative processes (Bożejko et al., 2008a). We can also come across a mixed, so-called semi-independent model (Czech, 2002), which executes independent search processes while sharing some common data.
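The difference between the two models can be illustrated by a minimal Python sketch of our own (not taken from the cited works): each thread runs its own trajectory on a hypothetical toy objective, and when a shared best-solution store is passed in, the threads periodically publish their results and may adopt the globally best one (cooperative model); passing shared=None makes them fully independent. The cost function and neighborhood move are placeholders, not part of any cited algorithm.

import random
import threading

def cost(x):                         # hypothetical goal function (sum of squares)
    return sum(v * v for v in x)

def random_neighbor(x):              # hypothetical neighborhood move
    y = list(x)
    i = random.randrange(len(y))
    y[i] += random.choice((-1, 1))
    return y

class SharedBest:
    """Best-known solution exchanged by cooperative threads."""
    def __init__(self):
        self.lock = threading.Lock()
        self.solution, self.value = None, float("inf")
    def update(self, x, v):
        with self.lock:
            if v < self.value:
                self.solution, self.value = list(x), v
    def get(self):
        with self.lock:
            return self.solution, self.value

def search(start, steps, shared=None):
    x, fx = start, cost(start)
    for step in range(steps):
        y = random_neighbor(x)
        fy = cost(y)
        if fy < fx:
            x, fx = y, fy
        if shared is not None:                    # cooperative variant only
            shared.update(x, fx)
            if step % 100 == 0:                   # occasionally adopt the global best
                best, best_val = shared.get()
                if best_val < fx:
                    x, fx = list(best), best_val
    return x, fx

shared = SharedBest()                              # pass shared=None for independent search
starts = [[random.randint(-20, 20) for _ in range(5)] for _ in range(4)]
threads = [threading.Thread(target=search, args=(s, 2000, shared)) for s in starts]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("best value found by the cooperating threads:", shared.value)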
2 THE METHODOLOGY OF METAHEURISTICS PARALLELIZATION
The majority of practical artificial intelligence problems, especially those connected with planning and job scheduling, belong to the class of strongly NP-hard problems, which require complex and time-consuming solution algorithms. Two main approaches are used to solve these problems: exact methods and metaheuristics. On the one hand, existing exact algorithms for NP-hard problems have exponential computational complexity, so in practice they are extremely time-consuming. On the other hand, metaheuristics provide suboptimal solutions in a reasonable time and can even be applied in real-time systems.
The quality of the best solutions determined by approximate algorithms depends, in most cases, on the number of analyzed solutions and therefore on the computation time. Time and quality show opposite tendencies in the sense that obtaining a better solution requires a significant increase in computation time. Constructing parallel algorithms makes it possible to increase significantly the number of solutions considered per unit of time by effectively using a multiprocessor computing environment.
The process of parallelizing an optimization algorithm is strongly connected with the solution space search method used by the algorithm. The two most frequent approaches are exploitation (or search intensification) and exploration (or search diversification) of the solution space. Following this classification we can distinguish two major categories within the metaheuristic class: local search methods (e.g., tabu search TS, simulated annealing SA, greedy randomized adaptive search procedure GRASP, variable neighborhood search VNS) and population-based algorithms (e.g., genetic algorithms GA, evolution strategies ES, genetic programming GP, scatter search SS, ant colony optimization ACO, memetic algorithms MA, estimation of distribution algorithms EDAs). Local search methods (LSMs) start with a single initial solution and improve it in each step by searching its neighborhood. LSMs often find a locally optimal solution; they are focused on exploitation of the solution space. Population-based algorithms (PBAs) use a population of individuals (solutions), which is improved in each generation. This means that the average goal function value of the whole population usually improves, which does not imply that every individual improves. The whole process is randomized, so these methods are almost always nondeterministic. We can say that PBAs are focused on exploration of the solution space.
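The two schemes can be contrasted by their control structures. The following Python sketch is illustrative only: local_search follows a single trajectory until a local optimum is reached (exploitation), while population_search improves a whole set of solutions generation by generation (exploration). The toy cost, neighborhood, crossover and mutation functions are assumptions for the demo, not part of the cited algorithms.

import random

def local_search(initial, cost, neighbors, steps):
    """LSM skeleton: one trajectory, improved step by step (exploitation)."""
    best = initial
    for _ in range(steps):
        candidate = min(neighbors(best), key=cost)    # best move in the neighborhood
        if cost(candidate) >= cost(best):
            break                                      # local optimum reached
        best = candidate
    return best

def population_search(population, cost, crossover, mutate, generations):
    """PBA skeleton: a population improved generation by generation (exploration)."""
    for _ in range(generations):
        offspring = [mutate(crossover(*random.sample(population, 2)))
                     for _ in range(len(population))]
        # the average cost of the survivors tends to improve,
        # but not every individual does
        population = sorted(population + offspring, key=cost)[:len(population)]
    return min(population, key=cost)

# Tiny demo on a hypothetical toy problem: minimize the sum of squares of an integer vector
cost = lambda x: sum(v * v for v in x)
neighbors = lambda x: [x[:i] + (x[i] + d,) + x[i + 1:] for i in range(len(x)) for d in (-1, 1)]
crossover = lambda a, b: a[:len(a) // 2] + b[len(b) // 2:]
mutate = lambda x: tuple(v + random.choice((-1, 0, 1)) for v in x)

start = tuple(random.randint(-10, 10) for _ in range(6))
print("LSM result:", local_search(start, cost, neighbors, 100))
pop = [tuple(random.randint(-10, 10) for _ in range(6)) for _ in range(20)]
print("PBA result:", population_search(pop, cost, crossover, mutate, 50))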
3 POPULATION-BASED ALGORITHMS
Population-based algorithms (genetic, memetic, particle swarm optimization, etc.) are well suited to parallelization due to their natural partitioning into separate groups of solutions, which can be processed concurrently. Using a population of individuals allows us to diversify the search process over the whole solution space. On the other hand, using cooperation, it is easy to intensify the search once a good region has been found by focusing individuals on it. Thanks to their concurrent nature, population-based algorithms are easy to parallelize, especially in the independent way using the multi-start model, as sketched below. Low-level parallelization is not so easy, because special properties of the considered problem usually have to be exploited.
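A minimal sketch of the independent multi-start model, assuming a toy numerical problem and a simple steady-state population search (all names and parameter values are illustrative, not the authors' implementation): several runs with different random seeds are executed in separate processes without any communication, and the best of their results is reported.

import random
from multiprocessing import Pool

def one_run(seed, generations=200, pop_size=30, n=8):
    """One independent population-based run on a hypothetical toy objective."""
    rng = random.Random(seed)
    cost = lambda x: sum(v * v for v in x)
    pop = [[rng.randint(-15, 15) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)
        child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]  # uniform crossover
        child[rng.randrange(n)] += rng.choice((-1, 1))                       # mutation
        worst = max(range(pop_size), key=lambda i: cost(pop[i]))
        if cost(child) < cost(pop[worst]):
            pop[worst] = child                                               # steady-state replacement
    best = min(pop, key=cost)
    return cost(best), best

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(one_run, range(4))        # four independent walks, different seeds
    print("best of all independent runs:", min(results))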
3.1 Genetic Algorithm
The genetic algorithm (GA) is an iterative technique that applies stochastic operators to a set of individuals (a population). Each individual of the population encodes a complete solution. The starting population is usually generated randomly. A GA applies a recombination operator (crossover) to two solutions in order to introduce diversity into the population. Additionally, a mutation operator, which randomly modifies an individual, is applied as insurance against stagnation of the search process. Traditionally, GAs were associated with a binary representation of a solution; however, in the job scheduling area a permutation-based solution representation is more popular and useful.
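The following Python sketch illustrates such a permutation-encoded GA on an assumed flow shop makespan objective, using order crossover (OX), swap mutation and simple truncation selection with illustrative parameter values; it is a schematic example, not the algorithm evaluated in this paper.

import random

def makespan(perm, proc_times):
    """Completion time of the last job on the last machine (flow shop objective)."""
    machines = len(proc_times[0])
    finish = [0.0] * machines
    for job in perm:
        for m in range(machines):
            finish[m] = max(finish[m], finish[m - 1] if m else 0.0) + proc_times[job][m]
    return finish[-1]

def order_crossover(p1, p2, rng):
    """OX: copy a slice from p1, fill the remaining positions in the order of p2."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child[i:j]]
    child[:i] = rest[:i]
    child[j:] = rest[i:]
    return child

def swap_mutation(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]

def genetic_algorithm(proc_times, pop_size=40, generations=300, pmut=0.2, seed=0):
    rng = random.Random(seed)
    n = len(proc_times)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]   # random permutations
    for _ in range(generations):
        pop.sort(key=lambda p: makespan(p, proc_times))
        parents = pop[: pop_size // 2]                          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = order_crossover(a, b, rng)
            if rng.random() < pmut:
                swap_mutation(child, rng)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: makespan(p, proc_times))

# Usage: 6 jobs x 3 machines with random processing times
times = [[random.randint(1, 9) for _ in range(3)] for _ in range(6)]
best = genetic_algorithm(times)
print("best permutation:", best, "makespan:", makespan(best, times))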