Table 3: Length of the step when v_obs is random and L_obs = 0.1 m.

Step     1     2     3     4     5     6
v_obs   0.14  0.05  0.16  0.10  0.16  0.04
L_step  0.50  0.23  0.37  0.44  0.10  0.50
Figure 5: Successful footstep planning when v_obs is random and L_obs = 0.1 m.
dynamic obstacles. Our footstep planning strategy is based on fuzzy Q-learning. The most appealing feature of our approach is its robustness: the proposed footstep planner remains operational for both constant and variable obstacle velocities.
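The fuzzy Q-learning scheme behind this strategy can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact design: the fuzzy rule base over obstacle distance, the candidate step lengths, and the learning parameters below are all assumptions chosen for clarity.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function peaking at b (assumed fuzzification)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative rule base over normalized obstacle distance, and
# candidate step lengths (m) roughly spanning the range in Table 3.
RULES = [("near", 0.0, 0.0, 0.4), ("mid", 0.2, 0.5, 0.8), ("far", 0.6, 1.0, 1.0)]
ACTIONS = [0.1, 0.2, 0.3, 0.4, 0.5]

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed learning parameters
Q = {name: [0.0] * len(ACTIONS) for name, _, _, _ in RULES}

def firing(x):
    """Normalized firing strength of each rule for state x."""
    w = [tri(x, a, b, c) for _, a, b, c in RULES]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def choose(x):
    """Per-rule epsilon-greedy choice; the global step length is the
    firing-strength-weighted sum of the step lengths each rule picks."""
    w = firing(x)
    picks = []
    for name, _, _, _ in RULES:
        if random.random() < EPS:
            picks.append(random.randrange(len(ACTIONS)))
        else:
            q = Q[name]
            picks.append(q.index(max(q)))
    step = sum(wi * ACTIONS[i] for wi, i in zip(w, picks))
    return step, picks, w

def update(x_next, reward, picks, w):
    """Distribute the temporal-difference error over the rules that fired."""
    w_next = firing(x_next)
    v_next = sum(wn * max(Q[name]) for wn, (name, _, _, _) in zip(w_next, RULES))
    q_now = sum(wi * Q[name][i] for wi, i, (name, _, _, _) in zip(w, picks, RULES))
    td = reward + GAMMA * v_next - q_now
    for wi, i, (name, _, _, _) in zip(w, picks, RULES):
        Q[name][i] += ALPHA * wi * td
```

A training loop would repeatedly call `choose` on the current obstacle state, execute the step, observe a reward (e.g. penalizing collision and drifting from the goal), and call `update`. Blending actions by firing strength is what lets the learned policy interpolate smoothly between step lengths, which is consistent with the varied L_step values in Table 3.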
Future work will focus on improving our footstep planning strategy:
• First, our current control strategy does not take into account the duration of the step. However, this parameter is very important with dynamic obstacles. Therefore, our goal is to enhance the proposed footstep planning so that it handles both the length and the duration of the step,
• Second, in some cases the biped robot cannot step over an obstacle, for example when the obstacle is too large. Consequently, the footstep planner must be able to propose a path that makes the robot avoid the obstacle,
• Third, in the long term, our goal is to design a more general footstep planner based on both local footstep planning and global path planning,
• Finally, experimental validation may be considered on a real humanoid robot. In this case, it is necessary to design the joint trajectories based on the positions of the feet.
ICINCO 2008 - International Conference on Informatics in Control, Automation and Robotics