PRUNING SEARCH SPACE BY DOMINANCE RULES IN BEST
FIRST SEARCH FOR THE JOB SHOP SCHEDULING PROBLEM
María R. Sierra
Dept. of Mathematics, Statistics and Computing, University of Cantabria
Facultad de Ciencias, Avda. de los Castros, E-39005 Santander, Spain
Ramiro Varela
Artificial Intelligence Center, Dept. of Computing, University of Oviedo
Campus de Viesques, E-33271 Gijón, Spain
Keywords:
Heuristic Search, Best First Search, Pruning by Dominance, Job Shop Scheduling.
Abstract:
Best-first graph search is a classic problem-solving paradigm capable of obtaining exact solutions to optimization problems. As it usually requires a large amount of memory to store the effective search space, in practice it is only suitable for small instances. In this paper, we propose a pruning method, based on dominance relations among states, for reducing the search space. We apply this method to an A* algorithm that explores the space of active schedules for the Job Shop Scheduling Problem with makespan minimization. The A* algorithm is guided by a consistent heuristic and is combined with a greedy algorithm that obtains upper bounds during the search process. We conducted an experimental study over a conventional benchmark. The results show that the proposed method reduces both the space and the time required to search for optimal schedules, so that it is able to solve instances with 20 jobs and 5 machines or 9 jobs and 9 machines. Also, A* is exploited with heuristic weighting to obtain sub-optimal solutions for larger instances.
1 INTRODUCTION
In this paper we propose a method based on dominance properties to reduce the effective search space in best-first search. The method is illustrated with an application of the A* algorithm (Hart et al., 1968; Nilsson, 1980; Pearl, 1984) to the Job Shop Scheduling Problem (JSSP) with makespan minimization. We establish a sufficient condition under which a state n1 dominates another state n2, so that n2 can be pruned. Also, we have devised a rule to evaluate this condition efficiently. The overall result is a substantial reduction in both the time and, mainly, the space required to search for optimal schedules.

Over the last decades, a number of methods have been proposed in the literature to deal with the JSSP with makespan minimization. In particular, there are some exact methods such as the branch and bound algorithm proposed in (Brucker et al., 1994) or the backtracking algorithm proposed in (Sadeh and Fox, 1996).
Like the majority of the efficient methods for the JSSP with makespan minimization, Brucker's algorithm relies on the concept of critical path, i.e. a longest path in the solution graph representing the processing order of operations in a solution. In particular, its branching schema is based on reversing orders on the critical path. The main problem of methods based on the critical path is that they cannot be efficiently adapted to objective functions other than makespan.
The algorithm proposed in (Sadeh and Fox, 1996) is guided by variable and value ordering heuristics, and its branching schema is based on the starting times of operations. It is not as efficient as Brucker's algorithm for makespan minimization, but it can easily be adapted to other classic objective functions such as total flow time or tardiness minimization. In this paper, we consider the search space of active schedules in order to evaluate the proposed method for pruning by dominance. This search space is suitable for any objective function.
The paper is organized as follows. In Section 2 the JSSP is formulated. Section 3 describes the search space of active schedules for the JSSP. Section 4 summarizes the main characteristics of the A* algorithm. In Section 5, the heuristic used to guide A* for the JSSP with makespan minimization is described. Section 6 introduces the concept of dominance and establishes some results and an efficient rule to test dominance for the JSSP. Section 7 reports results from the experimental study. Finally, Section 8 summarizes the main conclusions and outlines some ideas for future research.

R. Sierra M. and Varela R. (2008). PRUNING SEARCH SPACE BY DOMINANCE RULES IN BEST FIRST SEARCH FOR THE JOB SHOP SCHEDULING PROBLEM. In Proceedings of the Third International Conference on Software and Data Technologies - PL/DPS/KE, pages 273-280. DOI: 10.5220/0001896102730280. Copyright © SciTePress.
2 PROBLEM FORMULATION
The Job Shop Scheduling Problem (JSSP) requires scheduling a set of N jobs {J_1, ..., J_N} on a set of M resources or machines {R_1, ..., R_M}. Each job J_i consists of a set of tasks or operations {θ_i1, ..., θ_iM} to be sequentially scheduled. Each task θ_il has a single resource requirement R_θil, a fixed duration p_θil and a start time st_θil to be determined.
The JSSP has three constraints: precedence, capacity and no-preemption. Precedence constraints translate into linear inequalities of the type: st_θil + p_θil ≤ st_θi(l+1). Capacity constraints translate into disjunctive constraints of the form: st_v + p_v ≤ st_w ∨ st_w + p_w ≤ st_v, if R_v = R_w. No-preemption requires that the machine is assigned to an operation without interruption during its whole processing time. The objective is to come up with a feasible schedule such that the completion time, i.e. the makespan, is minimized.
In the sequel, a problem instance will be represented by a directed graph G = (V, A ∪ E). Each node in the set V represents an actual operation, with the exception of the dummy nodes start and end, which represent operations with processing time 0. The arcs of A are called conjunctive arcs and represent precedence constraints, and the arcs of E are called disjunctive arcs and represent capacity constraints. E is partitioned into subsets E_i, with E = ∪_{i=1,...,M} E_i, where E_i includes an arc (v, w) for each pair of operations requiring R_i. The arcs are weighted with the processing time of the operation at the source node. Node start is connected to the first operation of each job, and the last operation of each job is connected to node end.
A feasible schedule is represented by an acyclic subgraph G_s of G, G_s = (V, A ∪ H), where H = ∪_{i=1,...,M} H_i, H_i being a processing ordering for the operations requiring R_i. The makespan is the cost of a critical path. A critical path is a longest path from node start to node end.
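To make the critical-path definition concrete, the makespan of a feasible schedule can be computed as the cost of a longest start-to-end path in the acyclic solution graph, by relaxation in topological order. The Python sketch below uses a hypothetical toy instance (not from the paper); arcs are weighted with the processing time of their source operation:

```python
# Sketch: makespan as the cost of a longest path in an acyclic solution
# graph G_s. Nodes are operations; each arc (u, v) is weighted with p_u,
# the processing time of the source operation. "start"/"end" are dummies.
from collections import defaultdict, deque

def makespan(arcs, durations):
    """arcs: list of (u, v); durations: p_u for every node (0 for dummies)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set(durations)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    # longest-path lengths (the head r_v of each node) via topological order
    r = {v: 0 for v in nodes}
    queue = deque(v for v in nodes if indeg[v] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            r[v] = max(r[v], r[u] + durations[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return r["end"]

# Hypothetical 2-job solution: J1 = a->b, J2 = c->d, machine orders fixed
# so that a precedes d and c precedes b.
p = {"start": 0, "a": 3, "b": 2, "c": 2, "d": 4, "end": 0}
arcs = [("start", "a"), ("start", "c"),
        ("a", "b"), ("c", "d"),          # conjunctive (job) arcs
        ("a", "d"), ("c", "b"),          # fixed disjunctive (machine) arcs
        ("b", "end"), ("d", "end")]
print(makespan(arcs, p))  # -> 7
```

Here d cannot start before a completes at time 3, so it finishes at 7, which is the critical-path cost.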
In order to simplify expressions, we define the following notation for a feasible schedule. The head r_v of an operation v is the cost of the longest path from node start to node v, i.e. it is the value of st_v. The tail q_v is defined so that the value q_v + p_v is the cost of the longest path from v to end. Hence, r_v + p_v + q_v is the makespan if v is on a critical path; otherwise, it is a lower bound. PM_v and SM_v denote the predecessor and successor of v, respectively, on its machine sequence, and PJ_v and SJ_v denote the predecessor and successor of v, respectively, on its job.
A partial schedule is given by a subgraph of G where some of the disjunctive arcs are not fixed yet. In such a schedule, heads and tails can be estimated as

r_v = max{ max_{w ∈ P(v)} (r_w + p_w), r_{PJ_v} + p_{PJ_v} }
q_v = max{ max_{w ∈ S(v)} (p_w + q_w), p_{SJ_v} + q_{SJ_v} }     (1)

where P(v) denotes the disjunctive predecessors of v, i.e. the operations requiring machine R_v which are scheduled before v. Analogously, S(v) denotes the disjunctive successors of v. Hence, the value r_v + p_v + q_v is a lower bound on the best schedule that can be reached from the partial schedule. This lower bound may be improved by means of the Jackson's preemptive schedule, as we will see in Section 5.
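The head estimate of expression (1) translates into a one-line maximization; the sketch below uses hypothetical names and toy values (not from the paper):

```python
# Sketch of the head estimate of equation (1): the head of v is bounded by
# its scheduled disjunctive predecessors P(v) and by its job predecessor PJ_v.
def estimate_head(r, p, disj_preds, job_pred):
    """r, p: dicts of heads and processing times; disj_preds: P(v);
    job_pred: PJ_v (None if v is the first operation of its job)."""
    candidates = [r[w] + p[w] for w in disj_preds]
    if job_pred is not None:
        candidates.append(r[job_pred] + p[job_pred])
    return max(candidates, default=0)

# hypothetical values: two scheduled machine predecessors and one job predecessor
r = {"u": 0, "w": 2, "pj": 1}
p = {"u": 3, "w": 4, "pj": 2}
print(estimate_head(r, p, disj_preds=["u", "w"], job_pred="pj"))  # -> 6
```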
3 THE SEARCH SPACE OF
ACTIVE SCHEDULES
A schedule is active if no operation can start earlier without delaying at least one other operation. Perhaps the most appropriate strategy to calculate active schedules is the G&T algorithm proposed in (Giffler and Thomson, 1960). This is a greedy algorithm that produces an active schedule in N × M steps.

At each step, G&T makes a non-deterministic choice. Every active schedule can be reached by taking the appropriate sequence of choices. Therefore, by considering all choices, we have a complete search tree for strategies such as branch and bound, backtracking or A*. This is one of the usual branching schemas for the JSSP, as pointed out in (Brucker and Knust, 2006), and it is the approach taken, for example, in (Varela and Soto, 2002) and (Sierra and Varela, 2005).
Algorithm 1 shows the expansion operation that
generates the full search tree when it is applied suc-
cessively from the initial state, in which none of the
operations are scheduled yet.
In the sequel, we will use the following notation. Let O denote the set of operations of a problem instance, and let n1 and n2 be two search states. In n1, O can be decomposed into the disjoint union SC(n1) ∪ US(n1), where SC(n1) denotes the set of operations scheduled in n1 and US(n1) denotes the unscheduled
Algorithm 1: SUC(state n). Algorithm to expand a state n. When it is successively applied from the initial state, i.e. an empty schedule, it generates the whole search space of active schedules.

1. A = {v ∈ US(n); PJ_v ∈ SC(n)};
2. Let v ∈ A be the operation with the lowest completion time if it is scheduled next, that is r_v + p_v ≤ r_u + p_u, ∀u ∈ A;
3. B = {w ∈ A; R_w = R_v and r_w < r_v + p_v};
for each w ∈ B do
  4. SC(n') = SC(n) ∪ {w} and US(n') = US(n) \ {w};
     /* w gets scheduled in the new state n' */
  5. G_{n'} = G_n ∪ {w → v; v ∈ US(n'), R_v = R_w};
     /* st_w is set to r_w in n' and the arc (w, v) is added to the partial solution graph */
  6. c(n, n') = max{0, (r_w + p_w) − max{(r_v + p_v), v ∈ SC(n)}};
  7. Update the heads of the operations in US(n') in accordance with expression (1);
  8. Add n' to successors;
end for
9. return successors;
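A minimal Python sketch of this expansion follows, under an assumed state representation (not the authors' code); for brevity, the head update of step 7 is simplified to pushing back the remaining operations on machine R_w and the job successor of w:

```python
# Sketch of the SUC expansion of Algorithm 1 (assumed data layout, not the
# authors' code). An operation is a (job, pos) pair; p and mach give its
# duration and machine; a state is (frozenset of scheduled ops, heads dict).
def suc(state, p, mach, ops_per_job):
    scheduled, r = state
    # Step 1: A = unscheduled operations whose job predecessor is scheduled
    A = [(j, k) for (j, k) in p
         if (j, k) not in scheduled and (k == 0 or (j, k - 1) in scheduled)]
    if not A:
        return []
    # Step 2: v minimizes r_v + p_v over A
    v = min(A, key=lambda o: r[o] + p[o])
    # Step 3: conflict set B on machine R_v
    B = [w for w in A if mach[w] == mach[v] and r[w] < r[v] + p[v]]
    successors = []
    for w in B:
        # Steps 4-7: schedule w at st_w = r_w; push back the heads of the
        # remaining operations on machine R_w and of w's job successor
        # (a simplified form of the head update of expression (1))
        r2 = dict(r)
        done = r2[w] + p[w]
        for u in p:
            if u not in scheduled and u != w and mach[u] == mach[w]:
                r2[u] = max(r2[u], done)
        j, k = w
        if k + 1 < ops_per_job:
            r2[(j, k + 1)] = max(r2[(j, k + 1)], done)
        successors.append((scheduled | {w}, r2))
    return successors

# Hypothetical 2-job, 2-machine instance: both jobs visit machine 0 first.
p = {(0, 0): 3, (0, 1): 2, (1, 0): 2, (1, 1): 4}
mach = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}
root = (frozenset(), {op: 0 for op in p})
print(len(suc(root, p, mach, ops_per_job=2)))  # -> 2 (one successor per op in B)
```

At the root, both first operations compete for machine 0 and both conflict with the chosen v, so the conflict set B yields two successor states.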
ones. D(n1) = |SC(n1)| is the depth of node n1 in the search space. Given O' ⊆ O, r_n1(O') is the vector of heads of the operations of O' in state n1. r_n1(O') ≤ r_n2(O') iff for each operation v ∈ O', r_v(n1) ≤ r_v(n2), where r_v(n1) and r_v(n2) are the heads of operation v in states n1 and n2 respectively. Analogously, q_n1(O') is the vector of tails.
4 BEST-FIRST SEARCH
For best-first search we have chosen Nilsson's A* algorithm (Hart et al., 1968; Nilsson, 1980; Pearl, 1984). A* starts from an initial state s, a set of goal nodes Γ and a transition operator SUC such that, for each node n of the search space, SUC(n) returns the set of successor states of n. Each transition from n to n' has a positive cost c(n, n'). P_{s-n} denotes the minimum cost path from node s to node n. The algorithm searches for a path P_{s-o}, o ∈ Γ, with the optimal cost, denoted C*.
The set of candidate nodes to be expanded is maintained in an ordered list OPEN. The next node to be expanded is the one with the lowest value of the evaluation function f, defined as f(n) = g(n) + h(n), where g(n) is the minimal cost known so far from s to n (of course, if the search space is a tree the value of g(n) does not change; otherwise this value has to be updated as the search progresses) and h(n) is a positive heuristic estimation of the minimal distance from n to the nearest goal.

If the heuristic function underestimates the actual minimal cost h*(n) from n to the goals, i.e. h(n) ≤ h*(n) for every node n, the algorithm is admissible, i.e. it returns an optimal solution. Moreover, if h(n1) ≤ h(n2) + c(n1, n2) for every pair of states n1, n2 of the search graph, h is consistent. Two of the properties of consistent heuristics are that they are admissible and that the sequence of f(n) values of the expanded nodes is non-decreasing.
The heuristic function h(n) represents knowledge about the problem domain; hence, the closer h approximates h*, the more efficient the algorithm becomes, as it needs to expand fewer states to reach the optimal solution.
Even with consistent and well-informed heuristics, the cost of the search becomes prohibitive even for instances that are not too large. In that case, it is possible to relax the requirement of admissibility and modify the algorithm to obtain near-optimal solutions. Perhaps the most common technique to do so is dynamic weighting of the heuristic h. The rationale behind weighting is to enlarge the value of h(n) so that it is closer to h*(n). To do so, it is common to use an evaluation function of the form proposed in (Pohl, 1973)

f(n) = g(n) + P(n) h(n)     (2)

where P(n) ≥ 1 is the weighting factor; this factor may be calculated as

P(n) = 1 + K (1 − d(n)/D)     (3)

where K ≥ 0 is a parameter, and d(n) and D are the depth of node n in the search space and the maximum depth of a node, respectively. With dynamic weighting, the number of nodes expanded to reach a solution is expected to be lower than with the original A*, but admissibility is not preserved; however, the cost of the first solution state reached is not larger than C*(1 + K). As this node is not usually optimal, it makes sense to let A* keep searching for more solutions after the first one.
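Equations (2)-(3) amount to a few lines of code; the sketch below (with hypothetical g, h and depth values) shows how the weight decays with depth, so that deep nodes are evaluated with the plain f = g + h:

```python
# Sketch of Pohl's dynamic weighting (equations (2)-(3)): the deeper the
# node, the smaller the inflation applied to h, and the first solution
# found costs at most C*(1 + K).
def weighted_f(g, h, depth, max_depth, K):
    P = 1.0 + K * (1.0 - depth / max_depth)   # P(n) = 1 + K(1 - d(n)/D)
    return g + P * h

# near the root the heuristic is inflated the most ...
print(weighted_f(g=10, h=20, depth=0, max_depth=100, K=0.01))    # close to 30.2
# ... and at maximum depth f reduces to plain g + h
print(weighted_f(g=10, h=20, depth=100, max_depth=100, K=0.01))  # -> 30.0
```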
Also, best-first search may be combined with greedy algorithms to obtain upper bounds during the search. For example, just before expanding a node n, the greedy algorithm can be run to solve the subproblem represented by n. If this process turns out to be very time-consuming, the greedy algorithm may instead be run with a small probability. This is the approach taken in our experimental study.
Figure 1: The Jackson’s Preemptive Schedule for an OMS
problem instance.
5 A HEURISTIC FOR THE JSSP
Here, we use a heuristic for the JSSP based on problem relaxations. The residual problem represented by a state n is given by the unscheduled operations in n together with their heads and tails, i.e. the triplet J(n) = (US(n), r_n(US(n)), q_n(US(n))). In state n, a number of jobs in J have all their operations scheduled, whilst the remaining ones have some operations not scheduled yet; these subsets of J will be denoted as

J_US(n) = {J_i ∈ J; ∃j, 1 ≤ j ≤ M, θ_ij ∈ US(n)}
J_SC(n) = J \ J_US(n)     (4)
Also, we denote by C_max(J_SC(n)) the maximum completion time of the jobs in J_SC(n), i.e.

C_max(J_SC(n)) = max{r_θiM(n) + p_θiM; J_i ∈ J_SC(n)}     (5)

with C_max(J_SC(n)) = 0 if J_SC(n) = ∅.
A problem relaxation can be made in the following two steps. Firstly, for each machine m required by at least one operation in US(n), the simplified problem J(n)|_m = (US(n)|_m, r_n(US(n)|_m), q_n(US(n)|_m)) is considered, where US(n)|_m denotes the unscheduled operations in n requiring machine m. Problem J(n)|_m is known as the One Machine Sequencing (OMS) problem with heads and tails, where an operation v is defined by its head r_v, its processing time p_v on machine m, and its tail q_v. This problem is still NP-hard, so a new relaxation is made: the no-preemption constraint on machine m is dropped. This way, an optimal solution to the relaxed problem is given by the Jackson's preemptive schedule (JPS) (Carlier and Pinson, 1989; Carlier and Pinson, 1994).
Figure 1 shows an example of an OMS instance and its JPS. The JPS is calculated by the following algorithm: at every time t given by a head or by the completion of an operation, starting from the minimum r_v and until all jobs are completely scheduled, schedule on machine m the ready operation with the largest tail. Carlier and Pinson proved in (Carlier and Pinson, 1989; Carlier and Pinson, 1994) that calculating the JPS has a complexity of O(K log₂ K), where K is the number of operations.
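As an illustration, the JPS computation can be sketched in Python for a one-machine instance given as (r, p, q) triples (a hypothetical instance, not the one of Figure 1, and not the authors' implementation); sorting plus a heap keyed on tails gives the O(K log K) behaviour:

```python
import heapq

def jps_lower_bound(ops):
    """ops: list of (head r, processing time p, tail q) triples.
    Returns the makespan of the Jackson's preemptive schedule: at every
    release or completion time, run the ready operation with largest tail."""
    ops = sorted(ops)                       # by head r
    ready = []                              # max-heap on tail: (-q, index)
    rem = {}                                # remaining processing time
    t, i, bound = 0, 0, 0
    while i < len(ops) or ready:
        if not ready:                       # idle until the next release
            t = max(t, ops[i][0])
        while i < len(ops) and ops[i][0] <= t:
            heapq.heappush(ready, (-ops[i][2], i))
            rem[i] = ops[i][1]
            i += 1
        _, j = ready[0]
        nxt = ops[i][0] if i < len(ops) else t + rem[j]
        run = min(rem[j], nxt - t)          # run j until done or next release
        t += run
        rem[j] -= run
        if rem[j] == 0:
            heapq.heappop(ready)
            bound = max(bound, t + ops[j][2])   # completion + tail
    return bound

print(jps_lower_bound([(0, 3, 7), (1, 4, 2), (2, 2, 6)]))  # -> 11
```

Preemption happens implicitly: when a newly released operation has a larger tail, it takes over the top of the heap at the next event time.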
The JPS of problem J(n)|_m provides a lower bound of f*(n), due to the fact that the heads of the operations of US(n)|_m are adjusted from the scheduled operations SC(n). So, taking the largest of these values over the machines with unscheduled operations, and taking into account the value C_max(J_SC(n)), a lower bound of f*(n) is obtained. Then, to obtain a lower bound of h*(n), the value of the largest completion time of the operations in SC(n), i.e. g(n), should be discounted, and the heuristic, termed h_JPS, is calculated as

h_JPS(n) = max{C_max(J_SC(n)), JPS(J(n))} − g(n)
JPS(J(n)) = max_{m ∈ R} {JPS(J(n)|_m)}     (6)

As h_JPS is devised from a problem relaxation, it is consistent (Pearl, 1984).
6 DOMINANCE PROPERTIES
Given two states n1 and n2, we say that n1 dominates n2 if and only if the best solution reachable from n1 is better than, or at least of the same quality as, the best solution reachable from n2. In some situations this fact can be detected, and then the dominated state can be pruned early.
Let us consider a small example. Figure 2 shows the Gantt charts of two partial schedules, with three operations scheduled, corresponding to search states for a problem with 2 jobs and 3 or more machines. If the second operation of job J1 requires R2 and the third operation of J2 requires R3, it is easy to see that the best solution reachable from the state of Figure 2a cannot be better than the best solution reachable from the state of Figure 2b. This is because the residual problems of both states comprise the same set of operations, while in the first state the heads of all operations are larger than or at least equal to the heads in the second state. So, the state of Figure 2a may be pruned if both states are simultaneously in memory.

Figure 2: Partial schedules of two search states; state b) dominates state a).

Of course, a good heuristic will lead the search to explore the state of Figure 2b first if both of them are in OPEN at the same time. However, at a later time, the state of Figure 2a and a number of its descendants might also be expanded. Consequently, early pruning of this state can reduce the space and, if the comparison of states for dominance is done efficiently, also the search time.
Pruning by dominance is not new in heuristic search. For example, in (Nazaret et al., 1999) a similar method is proposed for the Resource Constrained Project Scheduling Problem (RCPSP), but no clear rules are given to apply it during the search; and in (Korf, 2003) and (Korf, 2004) various rules are proposed for the Bin Packing Problem and the two-dimensional Cutting Stock Problem, respectively, that allow pruning some of the siblings of a node n at the time of expanding this node.

More formally, we define dominance among states as follows.
Definition 1. Given two states n1 and n2, such that n1 ∉ P_{s-n2} and n2 ∉ P_{s-n1}, n1 dominates n2 if and only if f*(n1) ≤ f*(n2).
Of course, establishing dominance between any two states is problem dependent and it is not easy in general. Therefore, to define an efficient strategy, it is not possible to devise a complete method to determine dominance and apply it to every pair of states of the search space. So, what we have done is to establish a sufficient condition for dominance for the JSSP with makespan minimization. As we will see, this condition can be evaluated efficiently, so that the whole process of testing dominance is efficient, at the cost of not detecting all dominated states.
Theorem 1. Let n1 and n2 be two states such that US(n2) = US(n1) = US, f(n1) ≤ f(n2) and r_n1(US) ≤ r_n2(US). Then the following conditions hold:

1. q_n1(US) = q_n2(US).
2. n1 dominates n2.
Proof 1. Condition 1 comes from the fact that each operation v ∈ US is an unscheduled operation in both states n1 and n2, and so it does not have any disjunctive successor yet. So, according to equations (1), q_v(n1) = p_{SJ_v} + q_{SJ_v}(n1) and q_v(n2) = p_{SJ_v} + q_{SJ_v}(n2). As q_end(n1) = q_end(n2) = 0, reasoning by induction from node end backwards, we finally have q_v(n1) = q_v(n2). Hence, q_n1(US) = q_n2(US).
To prove condition 2 we can reason as follows. Let us denote by C*_max(J(n)) the optimal makespan of the subproblem J(n). Hence, for a state n, f*(n) = max{C_max(J_SC(n)), C*_max(J(n))} and f(n) = max{C_max(J_SC(n)), JPS(J(n))}, with JPS(J(n)) ≤ C*_max(J(n)).

From r_n1(US) ≤ r_n2(US) and q_n1(US) = q_n2(US) it follows that C*_max(J(n1)) ≤ C*_max(J(n2)), as every schedule for problem J(n2) is also a schedule for J(n1); and, for an analogous reason considering preemptive schedules, it also follows that JPS(J(n1)) ≤ JPS(J(n2)). From this result and f(n1) ≤ f(n2) it follows that, abbreviating C_max(J_SC(n_i)) as C_max(n_i), either

(a) C_max(n1) ≤ C_max(n2), or
(b) C_max(n1) > C_max(n2) and C_max(n1) ≤ JPS(J(n2)).

In case (a), as f*(n1) = max{C_max(n1), C*_max(J(n1))}, and analogously for f*(n2), both terms for n1 are bounded by the corresponding terms for n2, so it follows that f*(n1) ≤ f*(n2).

In case (b), f*(n2) = C*_max(J(n2)) ≥ max{C_max(n1), C*_max(J(n1))} = f*(n1). So n1 dominates n2.
6.1 Rule for Testing Dominance
From the results above, we can devise rules for testing dominance to be included in the A* algorithm. In principle, each time a new node n1 appears during the search, this node could be compared with any other node n2 reached previously. In this comparison, it should be verified whether n1 dominates n2 and also whether n2 dominates n1. If one of the nodes is dominated, it can be pruned. It could be the case that both n1 dominates n2 and n2 dominates n1; in this case either of them, but not both, can be pruned.

Obviously, this rule does not seem very efficient. So, in order to reduce the number of evaluations, we proceed as follows:
1. Each time a node n is selected by A* for expansion, n is compared with every node n' in OPEN such that D(n') = D(n) and f(n) = f(n'). If either of the nodes becomes dominated, it is pruned. In the case that both n dominates n' and n' dominates n, n' is pruned.

2. If node n is not pruned in step 1, it is compared with those nodes n' in the CLOSED list such that US(n') = US(n) (f(n') ≤ f(n) as a consequence of the consistency of the heuristic h_JPS). If n' dominates n, then n is pruned.
In step 1, n is not compared with the nodes n' in OPEN with f(n) < f(n'). In this situation, n could dominate n', but this will be detected later if n' is selected for expansion, as n will then be in CLOSED. In step 1, the nodes n' with D(n') = D(n) in OPEN are searched efficiently, as OPEN is organized as an array with a component for each possible depth, from 1 to N × M, and in each component a list of nodes sorted by f values is stored. Similarly, in step 2, the nodes n' with US(n') = US(n) may be searched efficiently in CLOSED, as this structure is organized as a hash table with the hash function computed from the set of unscheduled operations US(n) of each state n.
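The two structures can be sketched as follows (an assumed layout, not the authors' code): OPEN as an array of f-sorted buckets indexed by depth, and CLOSED as a hash table keyed by the frozen set of unscheduled operations:

```python
# Sketch of the Section 6.1 data structures: OPEN buckets support the
# step-1 lookup (same depth, same f); CLOSED supports the step-2 lookup
# (same set of unscheduled operations).
import bisect
from collections import defaultdict

class OpenByDepth:
    def __init__(self, max_depth):
        self.buckets = [[] for _ in range(max_depth + 1)]
    def push(self, node):                   # node: (f, depth, state)
        _, depth, _ = node
        bisect.insort(self.buckets[depth], node)
    def same_depth_same_f(self, node):      # candidates for the step-1 test
        f, depth, _ = node
        return [m for m in self.buckets[depth] if m[0] == f and m is not node]

closed = defaultdict(list)                  # step-2 lookup: US(n) -> states

open_list = OpenByDepth(max_depth=4)
a = (10, 2, "n1"); b = (10, 2, "n2"); c = (11, 2, "n3")
for node in (a, b, c):
    open_list.push(node)
print([s for _, _, s in open_list.same_depth_same_f(a)])  # -> ['n2']
closed[frozenset({"x", "y"})].append("n4")
print(closed[frozenset({"y", "x"})])                      # -> ['n4']
```

Keying CLOSED on a frozenset makes the lookup independent of the order in which operations were scheduled, which is exactly what the step-2 comparison needs.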
7 EXPERIMENTAL STUDY
For the experimental study we have chosen two sets of instances taken from the OR-library (http://people.brunel.ac.uk/mastjjb/jeb/info.html). First we have chosen 6 instances of size 20 × 5 (20 jobs and 5 machines): LA11 to LA15 and FT20. Then we have chosen instances of size 10 × 10. The reason for this selection is that these sizes are at the threshold of what our approach is able to solve. We used a prototype A* implementation coded in C++ and developed in Builder C++ 6.0 for Windows; the target machine was a Pentium 4 at 3 GHz with 2 GB of RAM.
To evaluate the efficiency of the proposed pruning method, we first solved these instances without considering upper bounds. So, no generated state n can be pruned by the condition f(n) ≥ UB, and these nodes must be inserted into the OPEN list even though they will never be expanded, due to the heuristic h_JPS being admissible. Moreover, in this case A* only completes the search either when a solution state is reached or when the computational resources (available memory or time limit) are exhausted. This allows us to estimate the size of the search space for these instances. We have set a time limit of 3600 seconds for each run.
Columns 2 to 5 of Table 1 summarize the results of this experiment. As we can observe, when pruning is not applied, instances LA11 and LA13 remain unsolved due to memory becoming exhausted. On the other hand, when pruning is applied, 5 of the six instances get solved and the number of expanded nodes is much lower in all 5 cases. Also, the time taken is lower.
In the second experiment, we have enhanced A* by calculating upper bounds by means of a greedy algorithm. As was done in (Brucker et al., 1994; Brucker, 2004), we have used the G&T algorithm with a selection rule based on JPS computations restricted to the machine required by the critical operations, i.e. those of set B in Algorithm 1. Here, with a given probability P, a solution is issued from the expanded node. Columns 6 and 7 of Table 1 report results from a set of experiments with P = 0.01; in this case the results are averaged over 20 runs.

Table 1: Summary of results of pruning by dominance over instances LA11-15 and FT20. The last two columns show results combining pruning by dominance with probabilistic calculation of heuristic solutions during the search (the heuristic algorithm is run from the initial state and then for each expanded state with probability P = 0.01); the results are averaged over 20 runs for each instance.

        No Pruning       Pruning          Pruning + UB (P = 0.01)
Inst.   Exp.      T.(s)  Exp.      T.(s)  Exp.      T.(s)
LA11    131470    143    105449    272    1         0
LA12    1689      1      965       2      127       1
LA13    111891    141    13599     33     10206     25
LA14    258       0      257       0      1         0
LA15    76967     93     22068     46     22066     46
FT20    9014      7      2756      4      2753      5

Bold indicates memory becoming exhausted.

As we can observe, in this case all 6 instances get solved, with both the time taken and the number of expanded nodes lower than in the experiments without upper bound calculation.
In the third series of experiments we applied the same method to a set of instances of size 9 × 9 obtained from the ORB set by eliminating the last job and the last machine. The results are reported in Table 2. As we can observe, when pruning is not applied only 3 out of the 10 instances get solved, while 7 instances get solved with pruning, and for the 3 previously solved instances the number of expanded nodes is much lower as well. However, in this case the effect of the greedy algorithm is almost null for the solved instances. But for the 3 unsolved instances, it seems that the greedy algorithm prevents many states from being included in OPEN, as the memory gets exhausted after a larger number of expanded nodes.
In the last series of experiments, we have considered the original ORB set with instances of size 10 × 10. As only one of these instances gets solved to optimality with the exact algorithm, we applied heuristic weighting with K = 0.01. In this case, A* is not stopped after reaching a solution state, but runs until the memory gets exhausted or the OPEN list gets empty. Table 3 summarizes the results of these experiments. As we can observe, only for instance 10 does the OPEN list get empty, and so the optimal solution is reached. For the remaining instances the memory gets exhausted before reaching the optimal solution, the mean error being 2.86 percent. This error is much larger than that expected from the weighted heuristic. The reason for this is that with K = 0.01, A* never reaches a solution node and the solution returned is
Table 2: Summary of results for the ORBR (9 × 9) instances obtained by reduction of the ORB instances. The results of the last two columns are averaged over 20 runs.

       No Pruning        Pruning           Pruning + UB (P = 0.01)
Inst   Exp.      T.(s)   Exp.      T.(s)   Exp.      T.(s)
1      229245    133     36043     42      36043     45
2      268186    149     31714     31      31708     34
3      467267    333     265217    1121    315933    1745
4      494016    329     79629     116     79628     122
5      588378    332     278995    738     379328    1320
6      430959    320     182174    454     181384    457
7      561617    335     74528     164     74192     165
8      427836    315     260231    1448    350872    2301
9      614638    352     272595    947     271024    950
10     525388    325     106407    165     102494    158

Bold indicates memory becoming exhausted.
the best one reached by the greedy algorithm.
Overall, we can conclude that the proposed method of pruning by dominance drastically reduces the size of the effective search space, this reduction being more relevant for the most difficult problems. As we can observe in Table 2, for the instances that get solved in both cases, i.e. with pruning and without it, the number of expanded states is reduced by almost an order of magnitude when pruning is exploited. This reduction is less significant for the instances of size 20 × 5, which are easier to solve, as shown in Table 1. However, the effect of the greedy algorithm over these instances is clearly more significant than over the instances of size 9 × 9. This is due to the fact that the heuristic estimations obtained from the Jackson's preemptive schedules are much more accurate for 20 × 5 instances than for 9 × 9 ones. Hence, both the greedy algorithm
Table 3: Summary of results combining pruning by dominance, UB calculation (P = 0.01) and heuristic weighting (K = 0.01), over the ORB (10 × 10) instances.

Instance  Optimum  Best found  Exp. nodes  T.(s)
ORB01     1059     1078        193019      206
ORB02     888      915         207282      198
ORB03     1005     1071        156886      225
ORB04     1005     1052        161222      199
ORB05     887      893         209521      202
ORB06     1010     1050        189990      200
ORB07     397      405         216814      203
ORB08     899      917         257795      213
ORB09     934      970         180866      202
ORB10     944      944         22775       23

Bold indicates memory becoming exhausted.
and A* itself are much more efficient, as they are guided by better heuristic knowledge. These results agree with those of other experiments, not reported here, that we have performed with less informed heuristics. In that case the reduction of the effective search space for the 20 × 5 instances was also of almost an order of magnitude, but in every case the performance was worse than that of heuristic h_JPS. Hence, we conjecture that the effect of pruning by dominance is inversely related to the accuracy of the heuristic estimation, so it may be especially interesting for complex problems where the current heuristics are not very accurate, as is the case for the RCPSP.
8 CONCLUSIONS
In this paper we propose a pruning method based on dominance relations among states to improve the efficiency of best-first search algorithms. We have applied this method to the JSSP, considering the search space of active schedules and the A* algorithm. To do so, we have defined a sufficient condition for dominance and a rule to evaluate this condition which is efficient, as it restricts comparison of the expanded node to only a fraction of the nodes in the OPEN and CLOSED lists. This method is combined with a greedy algorithm that computes upper bounds during the search. We have reported results from an experimental study over instances taken from the OR-library.

These experiments show that the proposed method of pruning by dominance, combined with the greedy algorithm to obtain upper bounds during the search process, is efficient, as it saves both space and time. Also, we have combined this method with a heuristic weighting method that obtains non-optimal solutions for large instances.
In comparison with other methods, our approach is more efficient than the backtracking algorithm proposed in (Sadeh and Fox, 1996), which is not able to solve instances of size 10 × 5; but it is less efficient than the branch and bound algorithm described in (Brucker et al., 1994; Brucker, 2004), which is able to solve instances of size 10 × 10 or even larger. Only one of the instances considered in our experimental study, FT20, cannot be solved to optimality by this algorithm.
Brucker's algorithm exploits a sophisticated branching schema based on the concept of critical path, which is not applicable to objective functions other than makespan. However, the branching schema based on the G&T algorithm is also suitable for objective functions such as total flow time or tardiness, so our approach is expected to be efficient in these cases as well.
As future work, we plan to improve our approach
with better heuristic estimations, new pruning rules
and more efficient greedy algorithms to obtain upper
bounds. Also, we plan to combine the pruning strat-
egy with constraint propagation techniques, such as
those proposed in (Dorndorf et al., 2000; Dorndorf
et al., 2002), as it is done in the branch and bound al-
gorithm described in (Brucker et al., 1994; Brucker,
2004).
It would also be interesting to apply the prun-
ing by dominance method to other search spaces for
the JSSP with makespan minimization and to other
scheduling problems which are harder to solve such
as the JSSP with total flow time or tardiness mini-
mization; and the the JSSP with setup times.
We will also tackle other problems such as the Trav-
elling Salesman Problem, the Cutting-Stock Prob-
lem or the RCPSP. As the search spaces of these prob-
lems have characteristics similar to the space of active
schedules for the JSSP, we expect similar efficiency
improvements in all cases.
ACKNOWLEDGEMENTS
This work has been supported by the Spanish Min-
istry of Science and Education under research project
TIN2007-67466-C02-01. The authors thank the
anonymous referees for their suggestions, which have
contributed to improving the paper.
REFERENCES
Brucker, P. (2004). Scheduling Algorithms. Springer, 4th
edition.
Brucker, P., Jurisch, B., and Sievers, B. (1994). A branch
and bound algorithm for the job-shop scheduling
problem. Discrete Applied Mathematics, 49:107–127.
Brucker, P. and Knust, S. (2006). Complex Scheduling.
Springer.
Carlier, J. and Pinson, E. (1989). An algorithm for solv-
ing the job-shop problem. Management Science,
35(2):164–176.
Carlier, J. and Pinson, E. (1994). Adjustment of heads and
tails for the job-shop problem. European Journal of
Operational Research, 78:146–161.
Dorndorf, U., Pesch, E., and Phan-Huy, T. (2000). Con-
straint propagation techniques for the disjunctive
scheduling problem. Artificial Intelligence, 122:189–
240.
Dorndorf, U., Pesch, E., and Phan-Huy, T. (2002). Con-
straint propagation and problem decomposition: A
preprocessing procedure for the job shop problem.
Annals of Operations Research, 115:125–142.
Giffler, B. and Thompson, G. L. (1960). Algorithms for solv-
ing production scheduling problems. Operations Re-
search, 8:487–503.
Hart, P., Nilsson, N., and Raphael, B. (1968). A formal
basis for the heuristic determination of minimum cost
paths. IEEE Trans. on Sys. Science and Cybernetics,
4(2):100–107.
Korf, R. (2003). An improved algorithm for optimal bin-
packing. In Proceedings of the 18th International Joint
Conference on Artificial Intelligence (IJCAI03), pages
1252–1258.
Korf, R. (2004). Optimal rectangle packing: New results.
In Proceedings of the 14th International Conference
on Automated Planning and Scheduling (ICAPS04),
pages 132–141.
Nazaret, T., Verma, S., Bhattacharya, S., and Bagchi, A.
(1999). The multiple resource constrained project
scheduling problem: A breadth-first approach. Euro-
pean Journal of Operational Research, 112:347–366.
Nilsson, N. (1980). Principles of Artificial Intelligence.
Tioga, Palo Alto, CA.
Pearl, J. (1984). Heuristics: Intelligent Search strategies for
Computer Problem Solving. Addison-Wesley.
Pohl, I. (1973). The avoidance of relative catastrophe,
heuristic competence, genuine dynamic weighting and
computational issues in heuristic problem solving. In
Proceedings of IJCAI73, pages 20–23.
Sadeh, N. and Fox, M. S. (1996). Variable and value order-
ing heuristics for the job shop scheduling constraint
satisfaction problem. Artificial Intelligence, 86:1–41.
Sierra, M. and Varela, R. (2005). Optimal scheduling with
heuristic best first search. Proceedings of AI*IA’2005,
Lecture Notes in Computer Science, 3673:173–176.
Varela, R. and Soto, E. (2002). Scheduling as heuristic search
with state space reduction. Lecture Notes in Computer
Science, 2527:815–824.
ICSOFT 2008 - International Conference on Software and Data Technologies