DYNAMIC GOAL COORDINATION IN PHYSICAL AGENTS

Jose Antonio Martin H., Javier de Lope

Abstract

A general framework for coordinating multiple competing goals in dynamic environments for physical agents is presented. This approach to goal coordination is a novel tool for incorporating a deep coordination ability into purely reactive agents. The framework is based on the notion of multi-objective optimization. We propose a kind of “aggregating functions” formulation with the particularity that the aggregation is weighted by a dynamic weighting unitary vector ω(S), which depends on the system's dynamic state and allows the agent to dynamically coordinate the priorities of its individual goals. This dynamic weighting unitary vector is represented as a set of n − 1 angles. The dynamic coordination must be established by a mapping from the state of the agent's environment S to the set of angles Φi(S), using any sort of machine learning tool. In this work we investigate the use of Reinforcement Learning as a first approach to learning that mapping.
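The abstract does not spell out how the n − 1 angles yield a unitary (unit-norm) weight vector. A standard way to parameterize a unit vector in n dimensions by n − 1 angles is hyperspherical coordinates; the sketch below uses that construction, plus a simple weighted-sum aggregation of the single-goal objective values. Function names and the aggregation form are illustrative assumptions, not the paper's exact implementation.

```python
import math

def angles_to_unit_vector(phis):
    """Map n-1 angles to an n-dimensional unit vector via
    hyperspherical coordinates (illustrative construction, not
    necessarily the parameterization used in the paper)."""
    w = []
    sin_prod = 1.0  # running product sin(phi_1)...sin(phi_k)
    for phi in phis:
        w.append(sin_prod * math.cos(phi))
        sin_prod *= math.sin(phi)
    w.append(sin_prod)  # last component closes the unit norm
    return w

def aggregate(objectives, phis):
    """Weighted aggregation of the single-goal objective values,
    with weights given by the angle-parameterized unit vector."""
    w = angles_to_unit_vector(phis)
    return sum(wi * oi for wi, oi in zip(w, objectives))
```

A learned policy would then output the angles Φi(S) for the current state S, and the aggregated value would drive action selection. Note that this parameterization can produce negative components; restricting each angle to [0, π/2] keeps all weights non-negative.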

References

  1. Albus, J. (1975). A new approach to manipulator control: The cerebellar model articulation controller (CMAC). J. of Dynamic Sys., Meas. and Control, pages 220-227.
  2. Fonseca, C. M. and Fleming, P. J. (1995). An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1):1-16.
  3. Isaacs, R. (1999). Differential Games. Dover Publications.
  4. Passino, K. (2005). Biomimicry for Optimization, Control, and Automation. Springer-Verlag.
  5. Sutton, R. (2006). Reinforcement learning and artificial intelligence. http://rlai.cs.ualberta.ca/RLAI/rlai.html.
  6. Sutton, R. and Barto, A. (1998). Reinforcement Learning, An Introduction. MIT Press.
  7. Sutton, R. S. (1996). Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Touretzky, D. S., editor, Adv. in Neural Inf. Proc. Systems, volume 8, pages 1038-1044. MIT Press.
  8. Zitzler, E., Laumanns, M., Thiele, L., and Fonseca, C. (2002). Why quality assessment of multiobjective optimizers is difficult. In Proc. GECCO 2002, pages 666-674.


Paper Citation


in Harvard Style

Antonio Martin H. J. and de Lope J. (2006). DYNAMIC GOAL COORDINATION IN PHYSICAL AGENTS. In Proceedings of the Third International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO, ISBN 978-972-8865-59-7, pages 154-159. DOI: 10.5220/0001216401540159


in Bibtex Style

@conference{icinco06,
author={Jose Antonio Martin H. and Javier de Lope},
title={DYNAMIC GOAL COORDINATION IN PHYSICAL AGENTS},
booktitle={Proceedings of the Third International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO},
year={2006},
pages={154-159},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001216401540159},
isbn={978-972-8865-59-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the Third International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO
TI - DYNAMIC GOAL COORDINATION IN PHYSICAL AGENTS
SN - 978-972-8865-59-7
AU - Antonio Martin H. J.
AU - de Lope J.
PY - 2006
SP - 154
EP - 159
DO - 10.5220/0001216401540159
ER -