defined (e.g. using the shortest path to move between
locations).
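As a purely illustrative sketch of such a decomposition (the grid representation, the passable predicate, and the ("step", dx, dy) action encoding are our own assumptions, not the paper's), a move goal could be expanded into its primitive actions by a breadth-first shortest-path search:

    from collections import deque

    def decompose_move_goal(start, target, passable):
        # Hypothetical decomposition of a "move to target" goal into
        # low-level step actions via BFS shortest path on a 4-connected
        # grid; passable(cell) says whether a cell can be entered.
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == target:
                # Walk back through parents to recover the action sequence.
                path = []
                while parent[cell] is not None:
                    prev = parent[cell]
                    path.append(("step", cell[0] - prev[0], cell[1] - prev[1]))
                    cell = prev
                return list(reversed(path))
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt not in parent and passable(nxt):
                    parent[nxt] = cell
                    queue.append(nxt)
        return None  # target unreachable; the goal is not applicable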
We have compared the performance of the algorithm to a slightly modified exhaustive max^n search, showing that despite examining only a small fraction of the game tree (less than 0.001% for a look-ahead of six game moves), the goal-based search is still able to find an optimal solution in 88.1% of cases; furthermore, even the suboptimal solutions produced are very close to the optimum. These results have been obtained with background knowledge designed before implementing and evaluating the algorithm, and without further optimization, in order to prevent over-fitting.
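For reference, a minimal sketch of the standard max^n value back-up, due to Luckhardt and Irani, on which the exhaustive baseline builds; the state interface (is_terminal, successors, player_of, evaluate) is our own illustration, not the paper's implementation:

    def max_n(state, depth, evaluate, player_of, successors):
        # Plain max^n back-up: the player to move at each interior node
        # picks the child whose backed-up value vector maximizes her own
        # component; leaves are scored by a heuristic returning one
        # value per player.
        if depth == 0 or state.is_terminal():
            return evaluate(state)
        mover = player_of(state)
        best = None
        for child in successors(state):
            value = max_n(child, depth - 1, evaluate, player_of, successors)
            if best is None or value[mover] > best[mover]:
                best = value
        return best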
Furthermore, we have tested the scalability of the algorithm on larger scenarios where the modified max^n search cannot be applied. We have confirmed that although the algorithm cannot overcome the exponential growth of the search space, this growth can be controlled by reducing the number of different goals a unit can pursue and by making the action sequences generated by goals longer. Simulations on a real-world scenario modelled as a multi-player asymmetric game proved the approach viable, though further optimizations and better background knowledge would be needed for the algorithm to discover complex strategies.
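To make the effect of these two parameters concrete, a back-of-the-envelope estimate (the symbols and the simplifying assumption that all units select goals synchronously are ours; the paper gives no such formula):

    \[
      \underbrace{\bigl(g^{p}\bigr)^{d/\ell}}_{\text{goal-level tree}}
      \quad\text{vs.}\quad
      \underbrace{\bigl(a^{p}\bigr)^{d}}_{\text{action-level tree}}
    \]

where p units each choose among g goals, every goal expands into ℓ primitive moves, each unit has a primitive actions per move, and d is the look-ahead in moves. For instance, g = 3, p = 2, ℓ = 3, d = 6 gives (3^2)^{6/3} = 81 goal-level nodes; reducing g shrinks the base, while lengthening ℓ shrinks the exponent.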
An important feature of the proposed approach is its compatibility with all existing extensions of general-sum game tree search based on a modified value back-up procedure, as well as with other optimizations. It is also insensitive to the granularity of space and time with which a game is modelled, as long as the structure of the goals remains the same and their decomposition into low-level actions is scaled correspondingly.
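The interface below is our own illustration of this compatibility claim, not the paper's code: the goal-based search simply delegates value back-up to a caller-supplied procedure, so an alternative general-sum back-up rule can be substituted without touching the search itself.

    def goal_based_search(state, depth, expand_goals, evaluate, back_up):
        # expand_goals(state) is assumed to yield (action_sequence,
        # next_state) pairs, one per goal available to the unit to move;
        # back_up reduces the children's value vectors for that player.
        # Longer action sequences consume more of the look-ahead budget.
        if depth <= 0 or state.is_terminal():
            return evaluate(state)
        mover = state.player_to_move()
        children = [goal_based_search(nxt, depth - len(seq), expand_goals,
                                      evaluate, back_up)
                    for seq, nxt in expand_goals(state)]
        if not children:  # no applicable goals: treat as a leaf
            return evaluate(state)
        return back_up(mover, children)

The plain max^n rule from the earlier sketch is then recovered as back_up = lambda mover, values: max(values, key=lambda v: v[mover]).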
In future research, we aim to implement additional
technical improvements in order to make the goal-
based search applicable to even larger problems. In
addition, we would like to address the problem of
the automatic extraction of goal-based background
knowledge from game histories. First, we will learn
goal initiation conditions for individual players and
use them for additional search space pruning. Second, we will address the more challenging problem of learning the goal decomposition algorithms themselves.
ACKNOWLEDGEMENTS
Effort sponsored by the Air Force Office of Scientific Research, USAF, under grant number FA8655-07-1-3083, and by the Research Programme No. MSM6840770038 of the Ministry of Education of the Czech Republic. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.