rithm. However, this process can be made simpler by including generic strategies that apply equally to any environment. A generic way to predict opponent goals and strategies, to calculate trust values, and to decide which deals to accept, based on the knowledge base of the President, would further simplify the development of an efficient agent.
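As an illustration of what such generic strategies might look like, the sketch below defines a game-independent interface for the three capabilities just mentioned. All names here (GenericNegotiationStrategies, the knowledge_base dictionary) are hypothetical and are not part of the framework described in this paper.

# Hypothetical sketch only: a game-independent interface for the three
# generic capabilities discussed above. None of these names come from
# the framework described in this paper.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class GenericNegotiationStrategies(ABC):
    """Services an agent could reuse unchanged in any negotiation game."""

    @abstractmethod
    def predict_opponent_goals(self, knowledge_base: Dict[str, Any],
                               opponent: str) -> List[str]:
        """Estimate an opponent's likely goals from its observed actions."""

    @abstractmethod
    def trust_value(self, knowledge_base: Dict[str, Any],
                    opponent: str) -> float:
        """Return a trust score in [0, 1], e.g. the fraction of deals the
        opponent has honoured so far."""

    @abstractmethod
    def accept_deal(self, knowledge_base: Dict[str, Any], deal: Any,
                    proposer: str) -> bool:
        """Decide whether a proposed deal should be accepted, weighing its
        utility against the proposer's trust score."""

A game-specific agent would then only supply the environment-dependent pieces, while reusing these services unchanged across games.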
An even larger step towards a truly generic system would be to include some form of abstract understanding of the rules of the game being played and of the board state, which the developer could define in a formal language such as the one used in the Zillions of Games software (Corporation, 2016) or in the General Game Playing project (Genesereth et al., 2005). With this capability, the system could generate agents able to play and negotiate in many different types of negotiation games simply by being given a file containing an abstract description of the game.
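A rough sketch of how such a pipeline might be organised is given below, assuming a hypothetical loader whose input format (GDL-like or Zillions-like) is left unspecified; none of these names come from the General Game Playing or Zillions tooling.

# Illustrative stub: a hypothetical pipeline that turns an abstract rules
# file into a playable negotiating agent. The loader, file format, and
# AbstractGame fields are our own inventions, not GGP or Zillions APIs.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AbstractGame:
    """Rules recovered from a formal game description."""
    roles: List[str]
    legal_moves: Dict[str, List[str]] = field(default_factory=dict)
    goal_conditions: Dict[str, str] = field(default_factory=dict)


def load_game_description(path: str) -> AbstractGame:
    """Parse a formal rules file; the concrete parser is out of scope."""
    raise NotImplementedError("parser for the chosen formal language")


def make_agent(game: AbstractGame, role: str):
    """Configure a single generic agent to play one role of the game."""
    raise NotImplementedError


# Intended usage: the developer supplies only the rules file.
# game = load_game_description("diplomacy.rules")
# agent = make_agent(game, role="FRANCE")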
The agents implemented during the course of this work, while generally efficient, could also be improved. One major improvement to AlphaDip would be to allow the agent to search for and negotiate movement commitments several rounds ahead, instead of only for the current round. In the case of AlphaWolf and the Werewolves of Miller's Hollow server we implemented, a key improvement would be the ability for AlphaWolf to use bluffing strategies, for example by making opponents believe it has a role other than its true one, as human players frequently do. If correctly implemented, this ability could make AlphaWolf much more effective, especially when playing against human opponents.
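As a toy illustration of the first improvement, multi-round commitment search can be framed as an optimisation over sequences of per-round commitments. The brute-force sketch below follows that framing only; the data model and the value function are ours and do not reflect AlphaDip's actual internals.

# Hypothetical sketch of negotiating commitments several rounds ahead:
# enumerate candidate commitment sequences and keep the best-valued one.
# Neither the data model nor the evaluation mirrors AlphaDip's internals.
from itertools import product
from typing import Callable, List, Sequence, Tuple

Commitment = str  # e.g. "support_ally" (illustrative labels only)


def best_commitment_plan(
    options_per_round: Sequence[List[Commitment]],
    value: Callable[[Tuple[Commitment, ...]], float],
) -> Tuple[Commitment, ...]:
    """Search over all commitment sequences spanning several rounds."""
    plans = product(*options_per_round)
    return max(plans, key=value)


# Toy usage: two rounds, two options each; in a real agent the value
# function would score the board states the plan is expected to reach.
plan = best_commitment_plan(
    [["hold", "support_ally"], ["attack", "hold"]],
    value=lambda p: p.count("support_ally") + 0.5 * p.count("attack"),
)
print(plan)  # ('support_ally', 'attack')

In practice the number of plans grows exponentially with the horizon, so a real implementation would prune candidate sequences using the agent's board evaluation rather than enumerating them exhaustively.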
ACKNOWLEDGEMENTS
We wish to thank André Ferreira and Dave de Jonge for their previous work in this area, upon which our work is based, and for always being available to talk and help whenever we needed.
REFERENCES
Calhamer, A. B. (2000). The Rules of Diplomacy. Avalon
Hill, 4th edition.
Corporation, Z. D. (2016). Zillions of Games.
http://www.zillions-of-games.com/. Accessed: 23-11-
2016.
de Jonge, D. (2015). Negotiations over Large Agreement Spaces. PhD thesis, Universitat Autònoma de Barcelona.
des Pallières, P. and Marly, H. (2009). Werewolves of Miller's Hollow: The Village. Lui-même.
Drogoul, A. (1995). When ants play chess (or can strategies emerge from tactical behaviours?). In Castelfranchi, C. and Müller, J.-P., editors, From Reaction to Cognition: 5th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW '93, Neuchâtel, Switzerland, August 25–27, 1993, Selected Papers, pages 11–27. Springer.
Fabregues, A. and Sierra, C. (2011). DipGame: A challeng-
ing negotiation testbed. Engineering Applications of
Artificial Intelligence, 24(7):1137–1146.
Ferreira, A. (2014). DipBlue: A Diplomacy agent with strategic and trust reasoning. Master's thesis, Universidade do Porto.
Ferreira, A., Lopes Cardoso, H., and Reis, L. P. (2015). Strategic negotiation and trust in Diplomacy – the DipBlue approach. In Nguyen, N. T., Kowalczyk, R., Duval, B., van den Herik, J., Loiseau, S., and Filipe, J., editors, Transactions on Computational Collective Intelligence XX, pages 179–200. Springer International Publishing, Cham.
Genesereth, M., Love, N., and Pell, B. (2005). General
Game Playing: Overview of the AAAI Competition.
AI Magazine, 26(2):62–72.
Heinrich, J. and Silver, D. (2016). Deep reinforce-
ment learning from self-play in imperfect-information
games. In Proceedings of the Third Deep Reinforce-
ment Learning Workshop, NIPS-DRL.
Johansson, S. and Olsson, F. (2005). Mars – a multi-agent system playing Risk. In Proceedings of the Pacific Rim International Workshop on Multi-Agents (PRIMA).
Johansson, S. J. and Håård, F. (2005). Tactical Coordination in No-press Diplomacy. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '05, pages 423–430, New York, NY, USA. ACM.
Kraus, S., Gan, R., and Lehmann, D. (1995). Designing and
Building a Negotiating Automated Agent. Computa-
tional Intelligence, 11(972):132–171.
Kraus, S., Lehmann, D., and Ephrati, E. (1989). An automated Diplomacy player. In Levy, D. and Beal, D., editors, Heuristic Programming in Artificial Intelligence: The 1st Computer Olympiad, pages 136–153. Ellis Horwood Limited, Chichester, UK.
Nash, J. (1951). Non-Cooperative Games. The Annals of
Mathematics, 54(2):286–295.
Norman, D. (2013). David Norman's DumbBot. http://www.daide.org.uk/index.php?title=DumbBot_Algorithm. Accessed: 12-07-2013.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489.