Authors: Simon Le Gloannec 1; Abdel Illah Mouaddib 1 and François Charpillet 2
Affiliations: 1 GREYC UMR 6072, Université de Caen Basse-Normandie, France; 2 MAIA team, LORIA, France
Keyword(s): MDP, Resource-bounded Reasoning, Hierarchical control, Autonomous agents, Planning and Scheduling.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Decision Support Systems; Formal Methods; Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Planning and Scheduling; Simulation and Modeling; Software Agents for Intelligent Control Systems; Symbolic Systems
Abstract:
This paper addresses the problem of an autonomous rover that has limited consumable resources to accomplish a mission. The robot has to cope with these limited resources: at each mission step it must decide how much of them to spend, and the resource consumption is itself uncertain. Progressive processing is a meta-level reasoning model particularly well suited to this kind of mission. Previous work has shown how to obtain an optimal resource consumption policy using a Markov decision process (MDP). Here, we assume that the mission can change dynamically during execution. The agent must therefore adapt to the current situation in order to save resources for the most valuable future tasks. Because of the dynamic environment, the agent cannot compute a new optimal policy online; it is, however, possible to compute an approximate value function. We show that the robot behaves as well as if it knew the optimal policy.
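As a rough illustration of the kind of computation the abstract refers to (an optimal resource-consumption policy obtained from an MDP), the following is a minimal value-iteration sketch for an MDP whose state includes a remaining-resource level. The task rewards, action set, and stochastic consumption model below are hypothetical placeholders, not taken from the paper.

```python
# Hypothetical sketch: value iteration over (task, remaining resource) states.
# Actions are candidate resource amounts to allocate to the current task;
# the amount actually consumed is stochastic. All numbers are illustrative.

TASKS = [10.0, 25.0, 15.0]   # hypothetical reward for completing each task
MAX_RESOURCE = 12            # discretized resource budget
ACTIONS = [2, 4, 6]          # candidate amounts of resource to allocate
# Stochastic consumption: allocating `a` actually consumes a-1, a, or a+1.
CONSUMPTION = {a: [(a - 1, 0.25), (a, 0.5), (a + 1, 0.25)] for a in ACTIONS}


def value_iteration():
    # V[(t, r)] = expected future reward from task index t with resource r.
    V = {(t, r): 0.0
         for t in range(len(TASKS) + 1)
         for r in range(MAX_RESOURCE + 1)}
    # Solve backwards over tasks; the value after the last task is 0.
    for t in reversed(range(len(TASKS))):
        for r in range(MAX_RESOURCE + 1):
            best = V[(t + 1, r)]  # option: skip the task, keep resources
            for a in ACTIONS:
                q = 0.0
                for consumed, p in CONSUMPTION[a]:
                    if consumed <= r:
                        # Task succeeds if enough resource remains.
                        q += p * (TASKS[t] + V[(t + 1, r - consumed)])
                    else:
                        # Otherwise the resource is spent with no reward.
                        q += p * V[(t + 1, max(r - consumed, 0))]
                best = max(best, q)
            V[(t, r)] = best
    return V


if __name__ == "__main__":
    V = value_iteration()
    print("Expected reward with full budget:", V[(0, MAX_RESOURCE)])
```

The paper's point is that when the task list changes at execution time, such a backward solve cannot be rerun online; the approximate value function it proposes stands in for the exact `V` computed above.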