main specific techniques designed to prune the search
space and to direct the search algorithm towards the
goal.
Heuristic search proved to be a very strong domain-independent technique. For example, we can refer to the Fast Forward planning system (Hoffmann and Nebel, 2001), which uses a heuristic estimate of the distance to the goal, or the HSP planner (Bonet and Geffner, 2001), which can automatically extract a heuristic function from the domain model.
It is known that domain-specific information improves the efficiency of planners significantly (Haslum and Scholz, 2003). There are planners that use state-centric domain control knowledge specified in temporal logic (Bacchus and Kabanza, 1999), (Kvarnström and Magnusson, 2003). Action-centric control knowledge can be encoded in hierarchical task networks (Nau et al., 2003), and it is also possible to automatically recompile a similar kind of control knowledge into PDDL (Baier et al., 2007).
In this paper we focus on two basic search techniques – branch and bound, and iterative deepening – in combination with two domain modeling approaches that add domain-specific information (a heuristic function and control knowledge rules). In particular, we investigate the role of heuristics and control knowledge in the search for optimal plans. Compared to the previously mentioned work, we use simple action-centric control knowledge in the form of additional preconditions, which is easy to describe, and admissible heuristic functions, which compute a lower bound on the plan length.¹ We use the heuristic function in a different way than A*-based algorithms do. Instead of labeling unvisited states in order to sort them, we use the value of the heuristic function to prune branches that can never lead to the goal because the search would run out of resources first.
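The pruning rule just described can be sketched as iterative deepening over the plan length, where an admissible heuristic cuts branches that cannot reach the goal within the current bound. This is a minimal illustration; the token-on-a-line toy domain and the names `ids_plan`, `actions`, and `h` are our own, not from the paper:

```python
def ids_plan(s0, final, actions, h, max_depth=50):
    """Iterative deepening over plan length; an admissible heuristic h
    prunes any branch where depth + h(s) already exceeds the bound."""
    def dfs(s, depth, bound, plan):
        if final(s):
            return plan
        if depth + h(s) > bound:      # h never overestimates, so this branch
            return None               # cannot reach the goal within the bound
        for a, s1 in actions(s):
            found = dfs(s1, depth + 1, bound, plan + [a])
            if found is not None:
                return found
        return None

    for bound in range(max_depth + 1):  # grow the bound until a plan fits,
        plan = dfs(s0, 0, bound, [])    # so the first plan found is shortest
        if plan is not None:
            return plan
    return None

# Toy domain: move a token from position 0 to position 4 on a line.
final = lambda s: s == 4
actions = lambda s: [("right", s + 1), ("left", s - 1)] if 0 <= s <= 4 else []
h = lambda s: abs(4 - s)                # admissible: exact remaining distance
print(ids_plan(0, final, actions, h))   # -> ['right', 'right', 'right', 'right']
```

Note that the heuristic here is never used to order successors, only to discard them, which matches the bounded-resource use of heuristics discussed above.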
We would like to demonstrate that the contribution of control knowledge and of the heuristic function is not constant during the search for the optimal plan in the context of a given search technique. We have performed a series of experiments to investigate whether we can exploit this fact. The most straightforward way to do so is to save the time spent computing the heuristic function by simply not computing it when it yields only a negligible improvement over the model without heuristics. This might also allow us to use stronger heuristic functions: such a function might slow down the search if computed all the time, but it might improve performance if computed at the right moments.
¹ A working code example for the nomystery domain that uses control knowledge and a heuristic function is available at http://picat-lang.org/projects.html
The structure of this paper is as follows. First, we give some background on automated planning and on the Picat programming language that was used to conduct the experiments. Then we introduce the three planning domains used in the experiments, together with descriptions of the control knowledge and heuristic functions used. In the fourth section we describe and evaluate the experiments performed. Finally, we discuss the results obtained and draw some conclusions for possible future work.
2 BACKGROUND
2.1 Automated Planning
Classical AI planning deals with finding a sequence
of actions that change the world from some initial
state to a goal state (Ghallab et al., 2004). We can
see AI planning as the task of finding a path in a di-
rected graph, where nodes describe states of the world
and arcs correspond to state transitions via actions.
Let γ(s,a) describe the state after applying action a to
state s, if a is applicable to s (otherwise the function
is undefined). Then the planning task is to find a sequence of actions a_1, a_2, ..., a_n, called a plan, such that, given the initial state s_0, for each i ∈ {1, ..., n}, a_i is applicable to the state s_{i-1}, s_i = γ(s_{i-1}, a_i), and s_n is a final state. For solving cost optimization problems,
a non-negative cost is assigned to each action and the
task is to find a plan with the smallest cost. The major
difference from classical path-finding is that the state
spaces for planning problems are extremely large and
hence a compact representation of states and actions
(and state transitions) is necessary.
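The definitions above can be made concrete in a short Python sketch, where γ returns the successor state or `None` when the action is inapplicable, and a candidate plan is validated step by step. The one-counter domain and the names `gamma`, `is_plan`, `inc`, and `dec` are hypothetical, chosen only for illustration:

```python
def gamma(state, action):
    """State-transition function γ(s, a): returns the successor state,
    or None when a is not applicable in s (i.e., γ is undefined)."""
    precond, effect = action
    return effect(state) if precond(state) else None

def is_plan(s0, plan, final):
    """A sequence a_1, ..., a_n is a plan iff each a_i is applicable to
    s_{i-1}, s_i = γ(s_{i-1}, a_i), and s_n satisfies the goal test."""
    s = s0
    for a in plan:
        s = gamma(s, a)
        if s is None:           # some a_i was not applicable
            return False
    return final(s)

# Toy one-counter domain: inc is always applicable, dec only when s > 0.
inc = (lambda s: True,  lambda s: s + 1)
dec = (lambda s: s > 0, lambda s: s - 1)
print(is_plan(0, [inc, inc, dec], lambda s: s == 1))  # -> True
print(is_plan(0, [dec], lambda s: s == -1))           # -> False (dec inapplicable)
```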
2.2 Picat Planning Module
Picat (Zhou, 2015) is a multi-paradigm logic-based programming language aimed at general-purpose applications. Aside from its other capabilities, the language features a built-in planner module with a simple interface, which was one of the main reasons why we chose it for our experiments.
The user only needs to define the initial state, which is normally a ground Picat term, and several predicates: in particular, the predicate final(S), which is used to check whether S is the goal state, and the predicate action(S,NextS,Action,ACost), which encodes the state-transition diagram of the planning problem. The state S can be transformed into NextS by performing Action. The cost of the action is ACost. If the plan length is the only interest, then ACost can be set to 1; otherwise it should be a non-negative number.
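For readers without a Picat installation, the shape of this interface can be mimicked in Python: below, `final` and `action` play the roles of their Picat counterparts, and a uniform-cost search stands in for the planner module's optimal search. The driver `best_plan`, the resource bound `max_cost`, and the weighted-graph toy domain are our own sketch, not the paper's code:

```python
import heapq
from itertools import count

def best_plan(s0, final, action, max_cost=100):
    """Uniform-cost search over the state-transition diagram: `final`
    tests goal states, `action` enumerates (NextS, Action, ACost) triples."""
    tie = count()                          # tie-breaker so states never compare
    frontier = [(0, next(tie), s0, [])]    # (cost so far, tie, state, plan)
    expanded = {}
    while frontier:
        cost, _, s, plan = heapq.heappop(frontier)
        if final(s):
            return plan, cost              # cheapest plan reaching a goal state
        if s in expanded and expanded[s] <= cost:
            continue                       # already expanded more cheaply
        expanded[s] = cost
        for s1, a, c in action(s):
            if cost + c <= max_cost:       # analogue of a resource bound
                heapq.heappush(frontier, (cost + c, next(tie), s1, plan + [a]))
    return None

# Hypothetical toy domain: cheapest walk from 'a' to 'd' in a weighted graph.
edges = {'a': [('b', 1), ('c', 3)], 'b': [('d', 5)], 'c': [('d', 1)]}
print(best_plan('a', lambda s: s == 'd',
                lambda s: [(t, s + "->" + t, c) for t, c in edges.get(s, [])]))
# -> (['a->c', 'c->d'], 4)
```

Setting every action cost to 1, as the text notes for Picat's ACost, reduces this to shortest-plan search.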
The Benefit of Control Knowledge and Heuristics During Search in Planning
553