controllers is therefore limited to relatively small roll
and pitch angles, and is not guaranteed in the case of
fast maneuvers. Machine learning techniques
have been successful in learning models based on data
from human pilots (Abbeel et al., 2007), in improving
performance of control using reinforcement learning
(Lupashin et al., 2010), and in exploring aggressive
maneuvers such as fast translation and back flips (Purwin
and D'Andrea, 2009; Gillula et al., 2011). In contrast
to these prior efforts, the approach proposed here
does not simply replicate the control of human operators;
rather, it is based on the premise that, through
self-learning, the robot can create its own representation
of its body and vision system and, after a period
of training, can learn to perform such maneuvers on
its own. More specifically, we demonstrate an
artificial neural network (ANN) representation of the
robot's vision and attitude systems that is capable
of controlling the robot to a target landing site.
One of the main advantages of the proposed ANN-based
approach is its ability to provide control laws through
implicit inversion of the robot's kinematic chain, or,
in our case, of the nonlinear transformation between the
vision system's output and the robot's attitude and
position vectors. The specific type of ANN explored
here is a self-organizing map (SOM), in which artificial
neurons participate in a competitive learning process,
allowing the network to "discover" the body of the robot
it describes. The resulting representation, known as a
body schema, was originally developed in the field of
cognitive robotics and is a cornerstone of the proposed effort.
The concept of a body schema was first conceived
by Head and Holmes (Head and Holmes, 1911) who
studied how humans perceive their bodies. Their defi-
nition of body schema is a postural model of the body
and its surface, formed by combining information
from proprioceptive, somatosensory, and visual
sensors. According to their theory, the brain uses
this model to register the location of sensation on the
body and control its movements. A classical example
supporting the notion of body schema is the phantom
limb syndrome, where amputees report sensations or
pain from their amputated limb (Melzack, 1990; Ra-
machandran and Rogers-Ramachandran, 1996). Re-
cent brain imaging studies have indeed confirmed that
body schema is encoded in particular regions of the
primate and human brains (Berlucchi and Aglioti,
1997; Graziano et al., 2000) along with body move-
ments (Berthoz, 2000; Graziano et al., 2002). More
importantly, it is now apparent that the body schema is
not static and can be modified dynamically to include
or "extend" the body during tool use (Iriki et al.,
1996) or when wearing a prosthetic limb (Tsukamoto,
2000). These and other advances of cognitive neuro-
science have led to the development of novel robot
control schemes.
The pliability of body schemas is one of the main
reasons a growing number of roboticists are explor-
ing the use of various schemas (e.g., motor, tactile,
and visual) in designing adaptable robots capable of ac-
quiring knowledge of themselves and their environ-
ment. Recent experiments in cognitive developmental
robotics have demonstrated that using tactile and vi-
sion sensors, a robot could learn its body schema (im-
age) through babbling in front of a camera viewing its
arms and, subsequently, using a trained neural net-
work representing its motion scheme, acquire an im-
age of its invisible face through Hebbian self-learning
(Fuke et al., 2007). Yet another study demonstrated an
ability of a robot to extend its body schema to include
a tool (a stick) without the need to re-learn its forward
kinematics; rather, a simple shift in the sensory field
(schema) of the robot was sufficient to reproduce the
task of reaching a particular point in space with the
stick (Stoytchev, 2003).
In this paper we extend the computational ap-
proach introduced by Morasso (Morasso and San-
guineti, 1995) which creates a link between the
robot’s configuration and sensor spaces utilizing a
self-organizing map (SOM). For an MAV, the sensor
space includes, in addition to the vision system's
output, the vehicle's pitch angle. The trained network is
then used to create a mapping between the configura-
tion and sensor spaces, thus presenting a self-learned
body schema. A unique feature of the approach is
that the robot control task does not require the use of
inverse kinematics, i.e., prediction of the robot's po-
sition and orientation in the global Cartesian space.
Instead, through the use of pseudo-potential fields
defined in the sensor space, the MAV is controlled to
the desired landing position and orientation using an
implicit inversion of the non-linear mapping between
configuration and sensor spaces. These features of the
proposed control scheme are illustrated in a 3-DOF
planar MAV model described in the subsequent sec-
tions of this paper. Implementing the proposed
approach requires the fusion of inertial and visual
information, as demonstrated through simulations in
this paper.
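To make the control idea concrete, the following toy sketch pairs sensor and configuration prototypes on a one-dimensional map and descends the sensor-space pseudo-potential U(s) = ||s - s_target||^2. The scalar configuration and the quadratic sensor model used below are hypothetical stand-ins for the trained SOM and the MAV's vision/attitude mapping, not the model developed in this paper:

```python
import numpy as np

def next_command(sensor_protos, config_protos, s_now, s_target):
    """One control step via descent of a sensor-space pseudo-potential.

    Find the map unit closest to the current sensor reading, then move
    to the lattice neighbor (or stay put) that minimizes the potential
    U(s) = ||s - s_target||^2, and return that unit's paired
    configuration.  The sensor-to-configuration inversion is implicit
    in the paired prototypes; no inverse kinematics is computed.
    """
    n = len(sensor_protos)
    winner = int(np.argmin(np.linalg.norm(sensor_protos - s_now, axis=1)))
    # Candidate moves: the winner itself and its lattice neighbors.
    candidates = [i for i in (winner - 1, winner, winner + 1) if 0 <= i < n]
    potentials = [float(np.sum((sensor_protos[i] - s_target) ** 2))
                  for i in candidates]
    return config_protos[candidates[int(np.argmin(potentials))]]
```

Iterating this step drives the configuration toward the one whose predicted sensor reading matches the target, mirroring the implicit inversion of the nonlinear configuration-to-sensor mapping described above.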
2 SELF-ORGANIZING BODY SCHEMA OF MAVs
2.1 3-DOF MAV Model
The quadrotor is modeled as a 3D free-moving (trans-
ICINCO 2015 - 12th International Conference on Informatics in Control, Automation and Robotics