algorithms if they do not have strategies such as the
trap strategy (John, Prakash, & Chaudhari, 2008). The
maps AR0514SR and AR0417SR contain dead ends and
blind alleys that make it more difficult for a target to
find an escape route, leading to lower target
performance on these maps.
The results clearly show that the new target
algorithm, MPTM, performs far better than the other
target algorithms in all of these simulations on the
gaming maps used for benchmarking.
With the stronger target algorithms, the pursuing
agents sometimes fail to catch the targets even though
the targets are outnumbered. The pursuers may catch
one target but fail to catch the other, keep following
a target without capturing it, or end in a deadlock
until the timeout.
The results of this study indicate that the MPTM
algorithm exceeds expectations: averaged over all
maps, its success rate is 9% and 15% higher than that
of Minimax and SF, respectively.
5 CONCLUSION AND FUTURE
WORK
The aim of this paper was to develop and study a
target algorithm for MAPF problems. There have been
many interesting studies on search algorithms, among
them solutions within MAPF frameworks, but only a
few studies have addressed target algorithms,
especially in settings with multiple targets.
The investigation shows that TrailMax is a
successful algorithm for controlling targets when it is
extended to deal with multiple agents. This study
proposes, for the first time, amendments to the
state-of-the-art TrailMax strategy, modifying and
extending its scope to meet the requirements of
multiple targets in a dynamic environment.
The new MPTM algorithm is applicable to moving
targets, and such a smart target method can force
even fast pursuing search algorithms to time out. The
results of this study demonstrate that target
algorithms are just as important as pursuer
algorithms, which makes the search problem more
challenging and interesting.
Future studies should also examine computation
time and how it could be improved. A more
systematic approach would study how the algorithm
behaves on different testbeds and with different agent
configurations, including those with a larger number
of players. A comparison with other multi-agent
pursuit algorithms would also be useful.
REFERENCES
Bulitko, V., & Sturtevant, N. (2006). State abstraction for
real-time moving target pursuit: A pilot study.
Proceedings of the AAAI Workshop on Learning for
Search, WS-06-11, pp. 72-79.
Chouhan, S. S., & Niyogi, R. (2017). DiMPP: A complete
distributed algorithm for multi-agent path planning.
Journal of Experimental & Theoretical Artificial
Intelligence, 29(6), 1129-1148.
Goldenberg, M., Kovarsky, A., Wu, X., & Schaeffer, J.
(2003). Multiple agents moving target search. IJCAI
International Joint Conference on Artificial
Intelligence, pp. 1536-1538.
Isaza, A., Lu, J., Bulitko, V., & Greiner, R. (2008). A cover-
based approach to multi-agent moving target pursuit.
Proceedings of the 4th Artificial Intelligence and
Interactive Digital Entertainment Conference, AIIDE
2008, pp. 54-59.
Ishida, T. (1992). Moving target search with intelligence.
Proceedings of the Tenth National Conference on
Artificial Intelligence, pp. 525-532.
John, T. C. H., Prakash, E. C., & Chaudhari, N. S. (2008).
Strategic team AI path plans: Probabilistic pathfinding.
International Journal of Computer Games Technology,
2008, 1-6.
Koenig, S., & Likhachev, M. (2002). D* lite. Proceedings
of the National Conference on Artificial Intelligence,
pp. 476-483.
Li, J., Gange, G., Harabor, D., Stuckey, P. J., Ma, H., &
Koenig, S. (2020). New techniques for pairwise
symmetry breaking in multi-agent path finding.
Proceedings of the International Conference on
Automated Planning and Scheduling, pp. 193-201.
Loh, P. K. K., & Prakash, E. C. (2009). Novel moving target
search algorithms for computer gaming. Computers in
Entertainment, 7(2), 27:1-27:16.
Moldenhauer, C., & Sturtevant, N. R. (2009). Evaluating
strategies for running from the cops. IJCAI
International Joint Conference on Artificial
Intelligence, pp. 584-589.
Panait, L., & Luke, S. (2005). Cooperative multi-agent
learning: The state of the art. Autonomous Agents and
Multi-Agent Systems, 11(3), 387-434.
Pellier, D., Fiorino, H., & Métivier, M. (2014). Planning
when goals change: A moving target search approach.
12th International Conference on Advances in
Practical Applications of Heterogeneous Multi-Agent
Systems: The PAAMS Collection, 8473, pp. 231-243.
Sharon, G., Stern, R., Felner, A., & Sturtevant, N. R.
(2015). Conflict-based search for optimal multi-agent
pathfinding. Artificial Intelligence, 219, 40-66.
Sigurdson, D., Bulitko, V., Yeoh, W., Hernández, C., &
Koenig, S. (2018). Multi-agent pathfinding with real-
time heuristic search. Proceedings of the 14th IEEE
Conference on Computational Intelligence and Games
(CIG), pp. 1-8.
Silver, D. (2005). Cooperative pathfinding. Proceedings of
the First AAAI Conference on Artificial Intelligence