ploration. This can be explained by the fact that the scenario is already relatively efficient. Nevertheless, this remains an interesting result: the system has learned to produce behavior that is almost as effective as an ad hoc scenario.
5 CONCLUSIONS AND PERSPECTIVES
The evolution of technologies now makes it possible to envision artificial systems facing increasingly complex and dynamic environments, in which they must perform an increasing variety of tasks. As the variety of environments and tasks grows, these systems need to constantly adapt their behavior to keep their interactions with their environment adequate. This requirement to constantly learn from interaction with the environment will be a key component of Ambient Systems, notably because of their socio-technological aspect. Indeed, the notion of task in Ambient Systems is ambiguous and depends on the users, with whom these systems must interact. The impossibility of specifying a priori all the interactions that can occur in such systems, combined with the high dynamics of these environments, precludes an ad hoc design. Instead, these artificial systems have to constantly learn, through their own experience, to interact with their environment.
In this paper, we presented our use of the SACL pattern to design artificial systems with Lifelong Learning capacities. This pattern proposes to design artificial systems in which a model is dynamically built from experience. The model is both exploited and enriched by the same mechanism, which uses it to decide the system's behavior.
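The exploit-and-enrich loop described above can be sketched as follows. This is a minimal illustration of the idea (a single mechanism that uses a context model to act while updating it from feedback), not the paper's actual multi-agent implementation; all names (ContextModel, propose, enrich, lifelong_step) are hypothetical.

```python
class ContextModel:
    """Maps observed contexts to actions, built incrementally from feedback."""

    def __init__(self, default_action):
        self.best_action = {}   # context -> best action seen so far
        self.best_reward = {}   # context -> feedback obtained for that action
        self.default_action = default_action

    def propose(self, context):
        # Exploit: reuse the action previously learned for this context, if any.
        return self.best_action.get(context, self.default_action)

    def enrich(self, context, action, reward):
        # Enrich: remember the action whose feedback was best so far.
        if reward > self.best_reward.get(context, float("-inf")):
            self.best_action[context] = action
            self.best_reward[context] = reward


def lifelong_step(model, context, act_and_observe):
    """One cycle in which the model is exploited and enriched at the same time."""
    action = model.propose(context)
    reward = act_and_observe(action)  # feedback from the environment
    model.enrich(context, action, reward)
    return action, reward
```

The point of the sketch is that exploitation and enrichment are not separate training and deployment phases: each interaction both uses the model and refines it.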
Our experiment illustrates how such a model can be built and exploited simultaneously, in real time. The simulation we performed also shows that our approach suits both supervised and reinforcement learning settings.
The work introduced in this paper is currently being deployed within the neOCampus initiative, which intends to transform the University of Toulouse into a smart living lab. This deployment will provide real use cases and enable comparative analyses to evaluate the benefits of our approach.
ACKNOWLEDGEMENTS
This work is partially funded by the Midi-Pyrénées region, within the neOCampus initiative (www.irit.fr/neocampus/), and supported by the University of Toulouse.
Lifelong Machine Learning with Adaptive Multi-Agent Systems