gio, 2017; Ke et al., 2019). Another example is
Recurrent Independent Mechanisms, a meta-learning
approach which decomposes the knowledge in the
training set into modules that can be re-used across
tasks (Goyal et al., 2019; Madan et al., 2021). The
selection of which modules to use for a given task is
performed by an attention mechanism, while Rein-
forcement Learning mechanisms drive the process of
adaptation to new parameters.
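The core idea of attention-based module selection can be illustrated with a toy sketch. The sizes, the linear modules, and the top-k selection rule below are illustrative assumptions for exposition, not the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a small pool of independent "modules" (here, simple linear
# maps), each with its own query vector used to score its relevance.
n_modules, d, k_active = 4, 8, 2
modules = [rng.normal(size=(d, d)) for _ in range(n_modules)]
queries = rng.normal(size=(n_modules, d))  # one query per module

def step(x):
    # Attention scores: how relevant is each module to the input x?
    scores = queries @ x
    # Sparse selection: only the top-k scoring modules are activated.
    active = np.argsort(scores)[-k_active:]
    out = x.copy()
    for i in active:
        out = out + modules[i] @ x  # active modules update the state
    return out, sorted(active.tolist())

x = rng.normal(size=d)
out, chosen = step(x)
print(chosen)  # indices of the modules selected for this input
```

Different inputs activate different subsets of modules, which is the mechanism that lets knowledge stored in individual modules be re-used across tasks.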
A modular capacity is better achieved in systems
that combine the data-processing capabilities of ML
models with the abstraction and logical-reasoning
capacity of symbolic AI methods. According to sup-
porters of this new paradigm, referred to as the Third
Wave or hybrid AI, statistical models are not enough
to achieve generalisation; we also need to teach sys-
tems to handle logical and symbolic reasoning. This
hybrid approach combining symbolic and sub-sym-
bolic methods makes it possible to retain the advan-
tages of both strategies, shed their respective weak-
nesses and, at the same time, build models that fare
much better at generalisation and abstraction (An-
thony et al., 2017; Bengio et al., 2019; Bonnefon and
Rahwan, 2020; Booch et al., 2020; Garcez and Lamb,
2020; Hill et al., 2020; Ke et al., 2019; Mao et al.,
2019; Moruzzi, 2020). The benefit of these hybrid
models lies in their capacity to combine the compu-
tational power of Deep Learning with symbolic and
logical reasoning, so that they not only process large
amounts of data but also identify which elements
within those data stay stable.
4 CONCLUSIONS
The ongoing research presented in this paper con-
tributes to an exhaustive and accurate analysis of
the notion of agency, a useful tool for investigating
how to build reliable and flexible decision-making
systems. Studying how the progression toward gen-
eralisation to unknown scenarios happens, and why
developing agency is necessary, helps create a deeper
theoretical understanding of the characteristics of a
robust decision-making process, contributing to ad-
dressing a fundamental issue within AI: whether and
how systems achieve causal agency.
The analysis of the parallel between decision-
making in humans and machines presented here
not only contributes to debates on human and arti-
ficial agency but can also provide relevant insights
for research in neuromorphic engineering (Indiveri
and Sandamirskaya, 2019). Indeed, one of the chal-
lenges in the development of embodied devices that
interact with the environment is the design of solu-
tions for generating context-dependent behaviour
that is adaptable to changing and unknown condi-
tions.
This paper has identified the ability to sort and
organise information through frames as a crucial
requisite for agents to build a robust model of their
environment, a model which allows them to adapt
and modify their choices according to context. The
analysis of the development of agency in decision-
making systems is an essential preliminary step in
studying whether the emulation of biological pro-
cesses is a viable path toward power-efficient solu-
tions, with the aim of building robust and flexible
artificial agents.
REFERENCES
Anthony, T., Tian, Z., and Barber, D. (2017). Thinking fast
and slow with deep learning and tree search. arXiv
preprint, arXiv:1705.08439.
Bengio, Y. (2017). The consciousness prior. arXiv preprint,
arXiv:1709.08568.
Bengio, Y., Deleu, T., Rahaman, N., Ke, R., Lachapelle, S.,
Bilaniuk, O., Goyal, A., and Pal, C. (2019). A meta-
transfer objective for learning to disentangle causal
mechanisms. arXiv preprint, arXiv:1901.10912.
Bertsimas, D. and Thiele, A. (2006). Robust and data-
driven optimization: Modern decision making under
uncertainty. INFORMS TutORials in Operations Re-
search, pages 95–122.
Bonnefon, J.-F. and Rahwan, I. (2020). Machine think-
ing, fast and slow. Trends in Cognitive Sciences,
24(12):1019–1027.
Booch, G., Fabiano, F., Horesh, L., Kate, K., Lenchner, J.,
Linck, N., Loreggia, A., Murugesan, K., Mattei, N.,
Rossi, F., et al. (2020). Thinking fast and slow in AI.
arXiv preprint, arXiv:2010.06002.
de Véricourt, F., Cukier, K., and Mayer-Schönberger, V.
(2021). Framers: Human Advantage in an Age of
Technology and Turmoil. Penguin Books Ltd, New
York.
Doyle, P. R., Edwards, J., Dumbleton, O., Clark, L., and
Cowan, B. R. (2019). Mapping perceptions of hu-
manness in speech-based intelligent personal assistant
interaction. arXiv eprint, arXiv:1907.11585.
Garcez, A. d. and Lamb, L. C. (2020). Neurosymbolic AI:
The 3rd wave. arXiv preprint, arXiv:2012.05876.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Ex-
plaining and harnessing adversarial examples. arXiv
preprint, arXiv:1412.6572.
Goyal, A., Lamb, A., Hoffmann, J., Sodhani, S., Levine,
S., Bengio, Y., and Schölkopf, B. (2019). Re-
current independent mechanisms. arXiv preprint,
arXiv:1909.10893.
Hansen, L. P. and Sargent, T. J. (2011). Robustness. Prince-
ton University Press, Princeton, NJ.
Climbing the Ladder: How Agents Reach Counterfactual Thinking