omy of these situations and a solution based on agent cooperation. Self-observation of self-adaptive multi-agent systems makes it possible to detect learning inaccuracies in the local models. Local active learning situations improve prediction performance by anticipating ambiguous situations and resolving them. All the behaviors of the cooperative agents are designed to be generic, so that they remain agnostic to the application domain, the learning technique and the number of dimensions. We are currently working in the field of robotics, on multi-articulated robotic arms, in order to learn their inverse kinematics model. Further work will also focus on comparisons with other learning techniques.
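To illustrate this principle, the following Python sketch (an illustrative example under our own assumptions, not the implementation described in this paper) shows a local context agent that self-observes its prediction error and, when the error exceeds a tolerance, raises a local active learning situation by proposing nearby points to query. The class name ContextAgent, the error_threshold parameter and the request_exploration method are hypothetical names introduced here for illustration only.

class ContextAgent:
    """A local linear model considered valid on a bounded region of the input space."""

    def __init__(self, low, high, weight=0.0, bias=0.0, error_threshold=0.1):
        self.low, self.high = low, high          # validity range of the local model
        self.weight, self.bias = weight, bias    # parameters of the local linear model
        self.error_threshold = error_threshold   # tolerated prediction inaccuracy

    def is_valid(self, x):
        return self.low <= x <= self.high

    def predict(self, x):
        return self.weight * x + self.bias

    def self_observe(self, x, observed_y, learning_rate=0.05):
        # Compare the local prediction with the observed feedback.
        error = observed_y - self.predict(x)
        if abs(error) > self.error_threshold:
            # Learning inaccuracy detected: instead of silently keeping an
            # ambiguous local model, raise a local active learning situation.
            return self.request_exploration(x)
        # Small error: simple local adjustment of the model parameters.
        self.weight += learning_rate * error * x
        self.bias += learning_rate * error
        return None

    def request_exploration(self, x):
        # Local active learning situation: propose nearby inputs to be queried.
        span = (self.high - self.low) / 4
        return [x - span, x + span]


agent = ContextAgent(low=0.0, high=1.0, weight=1.0, bias=0.0)
queries = agent.self_observe(x=0.5, observed_y=2.0)
print(queries)  # e.g. [0.25, 0.75]: points the system should sample next

In a complete system, such exploration requests would be arbitrated cooperatively among neighbouring agents rather than answered by a single local model.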