available soon. A further development of the modeling system could be the addition of a Functional Level. This level would be associated with the affordances of the environment as perceived by the robot. According to Gibson (Gibson, 1979), "the affordance of anything is a specific combination of the properties of its substance and its surface taken with reference to an animal." In other words, the term affordance can be understood as the function or role, perceived by an observer, that an object plays in the environment. Such functionalities are quickly perceived through vision, and full three-dimensional object models are not always required for them to be perceived.
Even if a robot had a full three-dimensional model of the environment and information about the movement of the objects, it would still not have human-like scene perception. When human beings (and animals) observe a scene, they "see" several possibilities and restrictions (Sloman, 1989), such as the possibility of acquiring more information by changing the viewpoint, or of reaching a goal by interacting with objects present in the environment. Gibson's affordances are therefore closely related to these possibilities and restrictions. Since affordances represent a rich source of information for understanding the environment, it is important to develop a strategy to identify and extract them from the images captured by the robot. Observing people while they execute common tasks may reveal some affordances of the environment. For example, the doors of an environment can be assigned the affordance "passage": if the robot observed people appearing and disappearing in a specific region, it would perceive that region as an access to that environment.
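As a minimal sketch of how such an affordance could be extracted, the following fragment (in Python) counts the grid cells in which tracked people first appear and last disappear; cells that accumulate enough of these events become candidate "passage" regions. The grid resolution, the event threshold, and all names here are illustrative assumptions, not elements of the system described in this paper.

    from collections import defaultdict

    # Sketch: infer a "passage" affordance from observed person tracks.
    # A track is the list of (x, y) positions at which one person was seen;
    # people entering or leaving make tracks begin or end near a passage.

    CELL = 0.5  # grid cell size in meters (assumed)

    def passage_cells(tracks, min_events=5):
        """Return grid cells where tracks frequently appear or disappear."""
        events = defaultdict(int)
        for track in tracks:
            if not track:
                continue
            for x, y in (track[0], track[-1]):           # first/last sighting
                cell = (int(x // CELL), int(y // CELL))  # quantize position
                events[cell] += 1
        return {cell for cell, n in events.items() if n >= min_events}

    # Example: three people observed entering through roughly the same region.
    tracks = [
        [(0.10, 2.00), (1.00, 2.10), (2.00, 2.30)],
        [(0.20, 2.10), (1.10, 2.00)],
        [(2.50, 2.20), (1.20, 2.10), (0.15, 2.05)],
    ]
    print(passage_cells(tracks, min_events=3))  # {(0, 4)}: candidate passage

On a real robot, the same counting could be applied to the output of a person tracker, and the resulting cells attached to the map as functional annotations.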
While the robot is building the map, or navigating based on a previously built map, it is likely to encounter an object or a person in its way. To avoid collisions, it is necessary to develop an obstacle detection algorithm and an obstacle avoidance strategy based on information extracted from the images. Moreover, an environment inhabited by people is subject to changes in its configuration. If these changes are not detected by the robot and reflected in the environment model, the map will no longer be a correct representation of the environment. Hence, it is also necessary to develop a methodology to detect changes in the environment configuration.
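As a minimal illustration of the change detection idea, assuming grayscale views captured from the same pose, the sketch below differences the current image against a stored reference and flags a change when enough pixels differ. The threshold values are illustrative assumptions, not part of the methodology called for above.

    import numpy as np

    DIFF_THRESHOLD = 30      # per-pixel gray-level difference (assumed)
    MIN_CHANGED_FRAC = 0.02  # fraction of changed pixels that flags a change

    def configuration_changed(reference, current):
        """Flag a configuration change when enough pixels differ."""
        ref = reference.astype(np.int16)  # signed type avoids uint8 wrap-around
        cur = current.astype(np.int16)
        changed = np.abs(cur - ref) > DIFF_THRESHOLD
        return changed.mean() > MIN_CHANGED_FRAC

    # Example with synthetic 100x100 grayscale views: an object appears.
    reference = np.full((100, 100), 120, dtype=np.uint8)
    current = reference.copy()
    current[40:60, 40:60] = 200  # a new object placed in the scene
    print(configuration_changed(reference, current))  # True (4% of pixels changed)

A deployed version would also need illumination compensation and a background model maintained over time, in the spirit of the surveillance-oriented initialization of Gutchess et al. (2001).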
REFERENCES
Appenzeller, G., Lee, J., and Hashimoto, H. (1997). Building topological maps by looking at people: An example of cooperation between intelligent spaces and robots. Proceedings of the International Conference on Intelligent Robots and Systems (IROS 1997), 3:1326–1333.
Artač, M., Jogan, M., and Leonardis, A. (2002). Mobile robot localization using an incremental eigenspace model. Proceedings of the International Conference on Robotics and Automation (ICRA 2002).
Bennewitz, M., Burgard, W., and Thrun, S. (2002). Using EM to learn motion behaviors of persons with mobile robots. Proceedings of the International Conference on Intelligent Robots and Systems (IROS 2002), 1:502–507.
Bennewitz, M., Burgard, W., and Thrun, S. (2003). Adapting navigation strategies using motion patterns of people. Proceedings of the International Conference on Robotics and Automation (ICRA 2003), 2:2000–2005.
Freitas, R., Santos-Victor, J., Sarcinelli-Filho, M., and Bastos-Filho, T. (2003). Performance evaluation of incremental eigenspace models for mobile robot localization. Proceedings of the 11th IEEE International Conference on Advanced Robotics (ICAR 2003), pages 417–422.
Fukui, R., Morishita, H., and Sato, T. (2003). Expression method of human locomotion records for path planning and control of human-symbiotic robot system based on spatial existence probability model of humans. Proceedings of the International Conference on Robotics and Automation (ICRA 2003).
Gaspar, J., Winters, N., and Santos-Victor, J. (2000). Vision-based navigation and environmental representations with an omni-directional camera. IEEE Transactions on Robotics and Automation, 16(6):890–898.
Gibson, J. J. (1979). The Ecological Approach to Visual
Perception. Houghton Mifflin, Boston.
Gracias, N. and Santos-Victor, J. (2000). Underwater video mosaics as visual navigation maps. Computer Vision and Image Understanding, 79(1):66–91.
Gutchess, D., Trajković, M., Cohen-Solal, E., Lyons, D., and Jain, A. K. (2001). A background model initialization algorithm for video surveillance. Proceedings of the International Conference on Computer Vision (ICCV 2001), 1:733–740.
Hall, P., Marshall, D., and Martin, R. (1998). Incremental eigenanalysis for classification. Proceedings of the British Machine Vision Conference (BMVC 1998), 14:286–295.
Kruse, E. and Wahl, F. (1998). Camera-based monitoring system for mobile robot guidance. Proceedings of the International Conference on Intelligent Robots and Systems (IROS 1998), 2:1248–1253.
Murakami, H. and Kumar, B. (1982). Efficient calculation of primary images from a set of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4(5):511–515.
Sloman, A. (1989). On designing a visual system (towards a Gibsonian computational model of vision). Journal of Experimental and Theoretical AI, 1(4):289–337.