COMBINING SELF-MOTIVATION WITH LOGICAL PLANNING AND INFERENCE IN A REWARD-SEEKING AGENT

Daphne Liu, Lenhart Schubert

2010

Abstract

We present preliminary work on a framework for building self-motivated, self-aware agents that plan continually so as to maximize long-term rewards. While such agents employ reasoned exploration of feasible sequences of actions and the corresponding states, they also behave opportunistically and recover from failure, thanks to their quest for rewards and their continual plan updates. The framework allows for both specific and general (quantified) knowledge, for epistemic predicates such as knowing-that and knowing-whether, for incomplete knowledge of the world, for quantitative change, for exogenous events, and for dialogue actions. Question answering and experimental runs are shown for a particular agent, ME, in a simple world of roads, various objects, and another agent, demonstrating the value of continual, deliberate, reward-driven planning.
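
To make the planning loop concrete, the sketch below shows one way such a continual, reward-driven cycle can be structured: bounded lookahead over feasible action sequences, execution of only the first step of the best plan found, then observation and replanning, which is what yields the opportunism and failure recovery noted above. This is a minimal illustrative sketch; the names (Action, plan_search, run_agent), the Python formulation, and the fixed three-step horizon are assumptions, not the authors' implementation of the agent ME.

# Minimal sketch (illustrative only) of a continual, reward-driven planning
# loop in the spirit of the framework above; the world model, actions, and
# reward function are assumed stand-ins, not the paper's agent ME.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]  # e.g. {"hunger": 3.0, "at_home": 1.0}


@dataclass
class Action:
    name: str
    precondition: Callable[[State], bool]   # is the action feasible here?
    effect: Callable[[State], State]        # predicted successor state


def plan_search(state: State, actions: List[Action], horizon: int,
                reward: Callable[[State], float]) -> Tuple[float, List[Action]]:
    """Bounded lookahead over feasible action sequences; returns the best
    (cumulative reward, plan) pair found within the horizon."""
    if horizon == 0:
        return 0.0, []
    best_value, best_plan = 0.0, []          # the empty plan is always allowed
    for act in actions:
        if not act.precondition(state):
            continue
        next_state = act.effect(dict(state))
        value, rest = plan_search(next_state, actions, horizon - 1, reward)
        value += reward(next_state)
        if value > best_value:
            best_value, best_plan = value, [act] + rest
    return best_value, best_plan


def run_agent(state: State, actions: List[Action],
              reward: Callable[[State], float],
              observe: Callable[[State], State], steps: int = 10) -> State:
    """Continual planning: execute only the first step of the current best
    plan, sense the (possibly exogenously changed) world, and replan."""
    for _ in range(steps):
        _, plan = plan_search(state, actions, horizon=3, reward=reward)
        if not plan:
            break
        state = plan[0].effect(state)        # act
        state = observe(state)               # perceive exogenous changes
    return state

A discount factor or a deeper, resource-bounded horizon could be substituted for the fixed three-step lookahead without changing the structure of the loop.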



Paper Citation


in Harvard Style

Liu D. and Schubert L. (2010). COMBINING SELF-MOTIVATION WITH LOGICAL PLANNING AND INFERENCE IN A REWARD-SEEKING AGENT. In Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-674-022-1, pages 257-263. DOI: 10.5220/0002700602570263


in Bibtex Style

@conference{icaart10,
author={Daphne Liu and Lenhart Schubert},
title={COMBINING SELF-MOTIVATION WITH LOGICAL PLANNING AND INFERENCE IN A REWARD-SEEKING AGENT},
booktitle={Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2010},
pages={257-263},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002700602570263},
isbn={978-989-674-022-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - COMBINING SELF-MOTIVATION WITH LOGICAL PLANNING AND INFERENCE IN A REWARD-SEEKING AGENT
SN - 978-989-674-022-1
AU - Liu D.
AU - Schubert L.
PY - 2010
SP - 257
EP - 263
DO - 10.5220/0002700602570263