are taken into account in (Billard et al., 2004), in
which agents learn new goals and how to achieve
them.
In (Chernova and Veloso, 2007) a demonstration-based learning algorithm, the confident execution framework, is used to train an agent to learn a policy from demonstration. During the learning process the agent observes the execution of actions and is provided with a decision-making mechanism that lets it actively choose whether to observe or to act, gradually increasing its autonomy. The policy is learned with a supervised approach whose training data are acquired from the demonstrations. None of these approaches, however, uses a relational representation formalism able to generalize the learned policies.
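The confidence-based decision mechanism described above can be illustrated with a minimal sketch. All names here are illustrative assumptions, and the toy exact-match confidence stands in for the Gaussian mixture density actually used by Chernova and Veloso (2007); it only shows the observe-or-act loop, not their implementation.

```python
class ConfidentExecutionAgent:
    """Toy confidence-based learner: acts autonomously when confident,
    otherwise requests a demonstration (a stand-in for the confident
    execution framework of Chernova and Veloso, 2007)."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.memory = {}  # state -> action, learned from demonstrations

    def confidence(self, state):
        # Toy estimate: 1.0 if this exact state was demonstrated, else 0.0.
        # (The real framework estimates a continuous density instead.)
        return 1.0 if state in self.memory else 0.0

    def step(self, state, demonstrator):
        if self.confidence(state) >= self.threshold:
            return self.memory[state], "acted"
        action = demonstrator(state)   # actively request a demonstration
        self.memory[state] = action    # supervised update of the policy
        return action, "observed"


# Usage: autonomy grows as demonstrations accumulate.
demo = lambda s: s * 2
agent = ConfidentExecutionAgent()
first = agent.step(3, demo)    # must observe the demonstrator
second = agent.step(3, demo)   # now confident enough to act
```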
In this paper we have presented a logic framework that allows an agent to be trained quickly, incrementally, and accurately to imitate a demonstrator performing a task.
REFERENCES
Atkeson, C. and Schaal, S. (1997). Robot learning from
demonstration. In Fisher, D., editor, Proceedings of
the 14th International Conference on Machine Learn-
ing (ICML), pages 12–20.
Bentivegna, D., Atkeson, C., and Cheng, G. (2004). Learn-
ing from observation and practice using primitives. In
AAAI Fall Symposium Series, ‘Symposium on Real-
life Reinforcement Learning’.
Billard, A., Epars, Y., Calinon, S., Cheng, G., and Schaal, S. (2004). Discovering optimal imitation strategies. Robotics and Autonomous Systems, Special Issue: Robot Learning from Demonstration, 47(2-3):69–77.
Billard, A. and Siegwart, R. (2004). Robot learning from
demonstration. Robotics and Autonomous Systems,
47(2-3):65–67.
Bratko, I. (2001). Prolog programming for artificial intelli-
gence, 3rd ed. Addison-Wesley Longman Publishing
Co.
Chernova, S. and Veloso, M. (2007). Confidence-based pol-
icy learning from demonstration using gaussian mix-
ture models. In AAMAS ’07: Proceedings of the 6th
international joint conference on Autonomous agents
and multiagent systems, pages 1–8, New York, NY,
USA. ACM.
Esposito, F., Ferilli, S., Fanizzi, N., Basile, T., and
Di Mauro, N. (2004). Incremental learning and con-
cept drift in inthelex. Intelligent Data Analysis Jour-
nal, Special Issue on Incremental Learning Systems
Capable of Dealing with Concept Drift, 8(3):213–237.
Jansen, B. and Belpaeme, T. (2006). A computational
model of intention reading in imitation. Robotics and
Autonomous Systems, 54(5):394–402.
Jebara, T. and Pentland, A. (2002). Statistical imitative
learning from perceptual data. In Proc. ICDL 02,
pages 191–196.
Lavrac, N. and Dzeroski, S. (1994). Inductive Logic Pro-
gramming: Techniques and Applications. Ellis Hor-
wood, New York.
Meltzoff, A. N. (2007). The "like me" framework for recognizing and becoming an intentional agent. Acta Psychologica, 124(1):26–43.
Muggleton, S. and De Raedt, L. (1994). Inductive logic
programming: Theory and methods. Journal of Logic
Programming, 19/20:629–679.
Nicolescu, M. N. and Mataric, M. J. (2003). Natural meth-
ods for robot task learning: instructive demonstra-
tions, generalization and practice. In Proceedings
of the second international joint conference on Au-
tonomous agents and multiagent systems (AAMAS03),
pages 241–248. ACM.
Schaal, S. (1999). Is imitation learning the route to
humanoid robots? Trends in cognitive sciences,
3(6):233–242.
Schaal, S., Ijspeert, A., and Billard, A. (2003). Com-
putational approaches to motor learning by imita-
tion. Philosophical Transactions: Biological Sci-
ences, 358(1431):537–547.
Semeraro, G., Esposito, F., and Malerba, D. (1996). Ideal
refinement of datalog programs. In Proietti, M., editor,
Logic Program Synthesis and Transformation, volume
1048 of LNCS, pages 120–136. Springer.
Smart, W. and Kaelbling, L. (2002). Effective rein-
forcement learning for mobile robots. In IEEE In-
ternational Conference on Robotics and Automation
(ICRA), volume 4, pages 3404–3410.
Ullman, J. (1988). Principles of Database and Knowledge-
Base Systems, volume I. Computer Science Press.
Verma, D. and Rao, R. P. N. (2007). Imitation learning using graphical models. In Kok, J. N., Koronacki, J., de Mántaras, R. L., Matwin, S., Mladenic, D., and Skowron, A., editors, 18th European Conference on Machine Learning, volume 4701 of LNCS, pages 757–764. Springer.
Wohlschlager, A., Gattis, M., and Bekkering, H. (2003). Action generation and action perception in imitation: An instantiation of the ideomotor principle. Philosophical Transactions of the Royal Society of London: Biological Sciences, 358(1431):501–515.