periment to be conducted, this is straightforward (i.e., inserting a few lines of code between the platform and the bot), either to reduce the precision range of variables (e.g., adding noise to the pixel image) or to randomly decide whether a gesture succeeds or fails (e.g., introducing a spurious command).
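As an illustration only, such an interposed layer might look like the following sketch; all names (NoisyInterface, sense, act, random_command) are hypothetical and not part of the platform's actual API.

```python
import random

class NoisyInterface:
    """Hypothetical wrapper interposed between the platform and the bot."""

    def __init__(self, platform, pixel_noise=0.05, failure_rate=0.1):
        self.platform = platform          # the underlying experiment platform
        self.pixel_noise = pixel_noise    # std-dev of additive pixel noise
        self.failure_rate = failure_rate  # probability that a gesture fails

    def sense(self):
        # Reduce the precision of the sensory input, e.g. the pixel image.
        image = self.platform.sense()
        return [p + random.gauss(0.0, self.pixel_noise) for p in image]

    def act(self, command):
        # Randomly make the gesture fail by issuing a spurious command instead.
        if random.random() < self.failure_rate:
            command = self.platform.random_command()
        self.platform.act(command)
```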
Going a step further, since all environment elements are available (not to the bot, but to the experiment software), it would be possible to design other cues or, more generally, other interactions with the environment, as is already done through plug-ins. However, in collaboration with neuroscience experimentalists, we have carefully selected what seems useful for exploring biological systemic models, and have avoided providing an overly general tool that does everything.
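To fix ideas, such an extension could be sketched as a plug-in along the following lines; this is purely illustrative, and the class, its setup/step hooks, and the environment calls (resource_visible, emit_cue) are assumptions rather than the platform's actual plug-in API.

```python
class CuePlugin:
    """Hypothetical plug-in adding a new cue to the environment."""

    def setup(self, environment):
        # The experiment software (not the bot) has full access to the world.
        self.environment = environment

    def step(self, time):
        # Example: emit a light cue whenever a resource becomes visible.
        if self.environment.resource_visible():
            self.environment.emit_cue("light", intensity=1.0)
```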
Building a “brainy-bot” is a rather large task that requires several high-level cognitive functionalities. However, although systemic neuroscience requires studying the system as a whole, this does not imply that each functionality has to be studied at the same level of detail. Several blocks may be treated as black boxes interacting with the part of the system to be studied extensively. This is why the present platform is not limited to a survival environment, but also comes with middleware (presently in development) related to the basic cognitive functionalities involved in such paradigms, as listed in Appendix 5. Some modules will thus be implemented from a rough description, e.g., via an algorithmic ersatz, whereas the nervous sub-system under study will be implemented at a very fine scale (mesoscopic neural network models or even spiking neural networks).
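As a minimal sketch of this separation (the module names and interface below are assumptions, not the actual middleware), a black-box ersatz and the finely modeled sub-system could expose the same interface to the rest of the bot:

```python
class Module:
    """Common interface shared by coarse-grained and fine-grained modules."""

    def step(self, inputs):
        raise NotImplementedError


class WorkingMemoryErsatz(Module):
    """Black-box algorithmic ersatz: simply keeps the last few inputs."""

    def __init__(self, capacity=7):
        self.capacity = capacity
        self.buffer = []

    def step(self, inputs):
        self.buffer = (self.buffer + [inputs])[-self.capacity:]
        return list(self.buffer)


class StudiedSubSystem(Module):
    """The nervous sub-system under study, modeled at a very fine scale
    (e.g. delegated to a mesoscopic or spiking neural network simulator)."""

    def step(self, inputs):
        # ... detailed neural simulation would be called here ...
        raise NotImplementedError
```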
The key features of this digital experimentation platform include the capability to perform experiments involving either long-term continuous-time paradigms or short-term decision tasks lasting only a few time steps. It also allows us to consider either symbolic motor commands and sensory inputs (e.g., ingesting food or not, detecting the presence of a stimulus) or quantitative gestures and complex trajectory generation (e.g., finding resources in an unknown environment). A key point is the ability to mimic, and repeat at will, experiments performed on animals in neuroscience laboratories. Here, the resulting computational models are not only going to “fit the data” but can be explored far beyond it, making it possible to study long-term adaptation, statistical robustness, etc. Not only can a single instance of a bot be checked, but several parallel experiments can also be run to explore different parameter ranges or to compare alternative models.
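As an illustration, running several bot instances in parallel over a small grid of parameter values could look like the sketch below; run_experiment and its parameters are hypothetical placeholders for the platform's actual entry point.

```python
from multiprocessing import Pool

def run_experiment(params):
    """Hypothetical entry point: runs one bot instance and returns metrics."""
    noise, learning_rate = params
    # ... set up the environment, the bot and the model, then run the task ...
    return {"noise": noise, "learning_rate": learning_rate, "reward": 0.0}

if __name__ == "__main__":
    # Independent bot instances explore different parameter combinations.
    grid = [(noise, lr) for noise in (0.0, 0.05, 0.1) for lr in (0.01, 0.1)]
    with Pool() as pool:
        results = pool.map(run_experiment, grid)
    for result in results:
        print(result)
```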
It would also be instructive to better understand to what extent such bio-inspired architectures, which are actually required to control a biological system, could enhance the artificial control rules commonly applied in robotics or game engines. This is a challenging issue, beyond the scope of the present study, but an interesting perspective for this work.
To conclude, let us mention that this platform has already been used for preliminary digital experiments on Pavlovian conditioning (Gorojosky and Alexandre, 2013), involving functional modeling of the amygdala and hippocampus; on decision-making mechanisms linked to reinforcing signals produced by aversive or appetitive stimuli and internal computation (Beati et al., 2013); and in a student project confronting the AGREL connectionist categorization model with a realistic environment (Carrere and Alexandre, 2013).
ACKNOWLEDGMENTS
This work was partly supported by the KEOpS ANR project. Many thanks to Nicolas Rougier for his precious advice and to Maxime Carrere for his feedback. The NeBICA’14 review was a real opportunity to improve the original draft; thank you.
REFERENCES
Beati, T., Carrere, M., and Alexandre, F. (2013). Which reinforcing signals in autonomous systems? In Third International Symposium on Biology of Decision Making, Paris, France.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., Diesmann, M., Morrison, A., Goodman, P. H., Harris, F. C., Zirpe, M., Natschläger, T., Pecevski, D., Ermentrout, B., Djurfeldt, M., Lansner, A., Rochel, O., Vieville, T., Muller, E., Davison, A. P., El Boustani, S., and Destexhe, A. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience, 23(3):349–398.
Carrere, M. and Alexandre, F. (2013). Émergence de catégories par interaction entre systèmes d’apprentissage. In Preux, P. and Tommasi, M., editors, Conférence Francophone sur l’Apprentissage Automatique (CAP), Lille, France.
Chemla, S., Chavane, F., Vieville, T., and Kornprobst, P.
(2007). Biophysical cortical column model for optical
signal analysis. BMC Neuroscience, 8(Suppl 2):P140.
Cofer, D., Cymbalyuk, G., Reid, J., Zhu, Y., Heitler, W. J., and Edwards, D. H. (2010). AnimatLab: a 3D graphics environment for neuromechanical simulations. Journal of Neuroscience Methods, 187(2):280–288.
Davison, A. P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., Pecevski, D., Perrinet, L., and Yger, P. (2008). PyNN: A Common Interface for Neuronal Network Simulators. Frontiers in Neuroinformatics, 2:11.