2 RELATED WORKS
In (Yan et al., 2012), a neural architecture is proposed for learning new navigation strategies for a robot based on the observation of human movements. In (Y.Sandamirskaya et al., 2011), a new architecture for the behavioural organization of an agent is proposed: the agent receives data from its sensors and from the environment of the robot, and its behaviour is expressed through actions. A theoretical formulation based on Bayes' rule and neural fields is also given. The work of (P.Cisek, 2006) is of major importance in the robot grasping field. Indeed, it provides a theory of object grasping based on the strategies used by animals to decide which object to reach and how to plan the movement. Because these processes stimulate the same brain regions, (C.Crick, 2010) developed a model which allows fixating and planning through a few parameters of the movement. In the area of navigation, (M.Milford and G.Wyeth, 2010) gave another view of SLAM by using a biological approach; this method allows the robot to localize itself in a large, changing environment. (Maja, 1992) proposed a method which synthesizes a life-like robot behaviour; this behaviour should be robust, repeatable and adaptive. Concerning the global descriptors used to characterize images, much work has been done; a survey is given in (Y.Raoui et al., 2010).
3 LOCALIZATION WITH EKF
USING LASER RANGE FINDER
In this section, we are interested in robot localization in a structured environment using the probabilistic approach (Y.Raoui et al., 2011). This approach induced a revolution in robotics when Thrun introduced it in 1995. Indeed, it takes into account the uncertainty in the movement of the robot, which is caused by many factors such as slippage and bumping. Through the fusion of the sensor model and the motion model, the robot can correct its position. In the figure "ground", we show the positions where the robot should be. These ground-truth states are of crucial importance because all the filtering steps depend on them: the robot has to compare its noisy position estimates with the ground truth.
3.1 Prediction of the Position
The most important element to consider in the prediction phase is the motion model, i.e. the model of the displacement of the robot. It should in fact be expressed probabilistically in order to move the mobile robot. The following probability relates the robot position $x_t$ to $x_{t+1}$ under an action $u$:

$$p(x_{t+1} \mid x_t, u)$$
In order to implement this equation, we use the pre-
diction step of the Kalman filter:
$$X_{t+1} = A \cdot X_t + B \cdot u$$
In this implementation, we consider the robot state as a pair of the mean robot position and the covariance. This distribution evolves until the robot ends its path.
Figure 1 shows the movement of the robot in a structured environment. We represent the state of the robot by $(x, y, \theta)$, which simplifies its estimation; a more complete formulation would integrate the Euler angles. As shown in the figure, the robot positions are affected by errors which prevent the robot from closing the loop, an ability that is important in both indoor and outdoor environments. At the same time, the uncertainty ellipse also grows, because the predicted covariance is growing. Thus, a correction step must be included so as to reduce this uncertainty ellipse, in other words to decrease the values on the diagonal of the covariance matrix.
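The prediction step above can be sketched as follows. This is a minimal illustration for a planar robot with state $(x, y, \theta)$; the motion model, the noise matrix `Q_motion` and the control values are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def predict(x, P, u, Q_motion):
    """EKF prediction: propagate the mean and covariance one step (dt = 1).

    The unicycle-style motion model and its Jacobian below are an
    assumed example, standing in for the paper's motion model.
    """
    v, w = u                                  # linear / angular velocity
    theta = x[2]
    # Non-linear motion model f(x, u)
    x_pred = x + np.array([v * np.cos(theta), v * np.sin(theta), w])
    # Jacobian A = df/dx, linearized at the current state
    A = np.array([[1.0, 0.0, -v * np.sin(theta)],
                  [0.0, 1.0,  v * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    P_pred = A @ P @ A.T + Q_motion           # covariance grows each step
    return x_pred, P_pred

x, P = np.zeros(3), np.eye(3) * 0.01
Q_motion = np.eye(3) * 0.02                   # assumed motion noise
for _ in range(10):                           # drive without any correction
    x, P = predict(x, P, (1.0, 0.1), Q_motion)
# np.trace(P) is now much larger than the initial 0.03: without a
# correction step, the uncertainty ellipse keeps growing.
```

Running the loop shows the effect described above: each prediction adds the motion noise to the covariance, so the diagonal of $P$, and hence the uncertainty ellipse, grows monotonically until a correction is applied.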
3.2 Correction of the Position
To update the position of the robot we use the observation computed from the laser range data and the GIST descriptor, both treated as stimuli. Let $z(x)$ be the observation function, where $x$ is the robot position and $z$ is the observation. Its Jacobian $H = \frac{dz}{dx}$ allows us to update the metrical position of the robot using the following equations:
$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_r^2 \end{pmatrix}$$

where $\sigma_r$ is the standard deviation of the robot motion.
$$S_t = H_t \Sigma_t H_t^T + Q_t$$

$S_t$ is the covariance of the predicted measurement, whose noise may come from the monocular camera or from the laser range finder.
$$K_t = \Sigma_t H_t^T S_t^{-1}$$

$K_t$ is the Kalman gain, and $Q_t$ is the error on the robot position.
The update equation of the robot position is:

$$X_{t+1} = X_t + K_t (z_t - \hat{z}_t)$$
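The correction step can be sketched as follows. For illustration we assume the robot observes its $(x, y)$ position directly, so $H$ is a constant $2 \times 3$ matrix; the values of $\sigma_r$ and of the predicted covariance are assumptions, not the paper's numbers.

```python
import numpy as np

def correct(x_pred, P_pred, z, H, Q_t):
    """EKF update: fuse the observation z with the predicted state."""
    z_hat = H @ x_pred                        # predicted measurement
    S = H @ P_pred @ H.T + Q_t                # innovation covariance S_t
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain K_t
    x_new = x_pred + K @ (z - z_hat)          # X_{t+1} = X_t + K_t (z - z_hat)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

sigma_r = 0.2                                 # assumed measurement std-dev
Q_t = np.eye(2) * sigma_r**2                  # Q_t = diag(sigma_r^2, sigma_r^2)
H = np.array([[1.0, 0.0, 0.0],                # observe x and y, not theta
              [0.0, 1.0, 0.0]])
x_pred = np.array([1.0, 2.0, 0.1])
P_pred = np.eye(3) * 0.5
z = np.array([1.2, 1.9])                      # noisy position measurement
x_new, P_new = correct(x_pred, P_pred, z, H, Q_t)
# The diagonal of P_new shrinks for the observed components (x, y),
# while the unobserved theta component is left unchanged.
```

This makes the role of the correction concrete: the gain $K_t$ pulls the estimate toward the measurement, and the covariance update shrinks exactly the diagonal entries that the observation constrains, which is how the uncertainty ellipse is reduced.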
ICINCO 2015 - 12th International Conference on Informatics in Control, Automation and Robotics