TRANSFER LEARNING FOR MULTI-AGENT COORDINATION

Peter Vrancx, Yann-Michaël De Hauwere, Ann Nowé

2011

Abstract

Transfer learning leverages an agent’s experience in a source task to improve its performance in a related target task. This technique has recently received attention in reinforcement learning settings: training a reinforcement learning agent on a suitable source task allows it to reuse that experience to significantly improve performance on more complex target problems. Current reinforcement learning transfer approaches focus almost exclusively on speeding up learning in single-agent systems. In this paper we investigate the potential of applying transfer learning to the problem of agent coordination in multi-agent systems. The idea underlying our approach is that agents can determine how to deal with the presence of other agents in a relatively simple training setting; by generalizing this knowledge, they can then speed up learning in more complex multi-agent learning tasks.
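The transfer idea the abstract describes can be made concrete with a small, self-contained sketch: learn a Q-table on a simple source task, then use a mapping between tasks to initialise learning on a larger target task. The chain-world environment, the hand-coded inter-task state mapping, and all hyperparameters below are illustrative assumptions for exposition, not the multi-agent tasks or the transfer mechanism evaluated in the paper.

# A minimal sketch of value-based transfer with tabular Q-learning.
# The chain world, the state mapping and the hyperparameters are
# illustrative assumptions, not the paper's tasks or method.
import random
from collections import defaultdict

ACTIONS = (-1, +1)  # step left / step right along the chain

def greedy(qvals):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(qvals)
    return random.choice([a for a, v in enumerate(qvals) if v == best])

def q_learning(n_states, episodes, q=None, alpha=0.1, gamma=0.95, eps=0.2):
    """Tabular Q-learning on a 1-D chain; reward 1 for reaching the last state."""
    goal = n_states - 1
    if q is None:
        q = defaultdict(lambda: [0.0, 0.0])
    steps_log = []
    for _ in range(episodes):
        s, steps = 0, 0
        while s != goal:
            a = random.randrange(2) if random.random() < eps else greedy(q[s])
            s2 = min(max(s + ACTIONS[a], 0), goal)
            r = 1.0 if s2 == goal else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
        steps_log.append(steps)
    return q, steps_log

random.seed(0)

# 1) Train on a small source chain (5 states).
source_q, _ = q_learning(n_states=5, episodes=200)

# 2) Transfer: initialise the target table through a hand-coded
#    inter-task mapping that rescales target states onto source states.
target_q = defaultdict(lambda: [0.0, 0.0])
for s in range(20):
    target_q[s] = list(source_q[round(s * 4 / 19)])

# 3) Train on the larger target chain (20 states), with and without transfer.
_, with_transfer = q_learning(20, 50, q=target_q)
_, scratch = q_learning(20, 50)
print("steps in first episode, with transfer:", with_transfer[0])
print("steps in first episode, from scratch :", scratch[0])

The transferred Q-values bias early exploration toward the goal, so the first target-task episodes are far shorter than learning from scratch; this mirrors the jump-start effect that transfer methods aim for, here under the stated toy assumptions.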



Paper Citation


in Harvard Style

Vrancx, P., De Hauwere, Y.-M. and Nowé, A. (2011). TRANSFER LEARNING FOR MULTI-AGENT COORDINATION. In Proceedings of the 3rd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-8425-41-6, pages 263-272. DOI: 10.5220/0003185602630272


in Bibtex Style

@conference{icaart11,
  author={Peter Vrancx and Yann-Michaël De Hauwere and Ann Nowé},
  title={TRANSFER LEARNING FOR MULTI-AGENT COORDINATION},
  booktitle={Proceedings of the 3rd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
  year={2011},
  pages={263--272},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0003185602630272},
  isbn={978-989-8425-41-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 3rd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - TRANSFER LEARNING FOR MULTI-AGENT COORDINATION
SN - 978-989-8425-41-6
AU - Vrancx P.
AU - De Hauwere Y.-M.
AU - Nowé A.
PY - 2011
SP - 263
EP - 272
DO - 10.5220/0003185602630272
ER -