method steps. To demonstrate the correct performance of our method, we have carried out a set of experiments in both virtual environments at different positions. Figure 9 shows the positions where the images were captured in each environment. We use a total of 14 positions, with 20 images captured at each position.
Figure 10 shows the results of these experiments. The red line shows the magnitude of the upward translation and the blue line shows the magnitude of the downward translation. These experiments demonstrate that the method is highly linear for relative height values of around 1 meter.
6 CONCLUSIONS
In this work, a method to estimate the relative height of the robot has been presented. The method describes each omnidirectional image with a descriptor built from its Radon transform, compares the descriptors of two images, and estimates the relative height of the robot from this comparison, taking into account the changes that the Radon transforms of the scenes undergo when the robot changes its height.
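The descriptor stage of this pipeline can be sketched with a toy discrete Radon transform. The function below is only an illustration in plain NumPy, using four projection angles, and is not the implementation used in this work, which operates on dense Radon transforms of omnidirectional images:

```python
import numpy as np

def radon_descriptor(img):
    """Coarse discrete Radon transform at 0, 45, 90 and 135 degrees.

    Each projection sums the image along parallel lines at that angle.
    Toy stand-in only: a real descriptor uses many more angles.
    """
    n = img.shape[0]
    proj_0   = img.sum(axis=0)                                    # vertical rays
    proj_45  = np.array([np.trace(img, k) for k in range(-n + 1, n)])
    proj_90  = img.sum(axis=1)                                    # horizontal rays
    proj_135 = np.array([np.trace(np.fliplr(img), k) for k in range(-n + 1, n)])
    return [proj_0, proj_45, proj_90, proj_135]

img = np.zeros((8, 8))
img[3, :] = 1.0                    # a single bright horizontal line
d = radon_descriptor(img)
print(d[0])                        # 0-degree projection: flat, all ones
print(d[2])                        # 90-degree projection: a single spike of 8
```

The bright line produces a flat profile when projected along it and a single spike when projected across it, which is the kind of angle-dependent signature the descriptor comparison exploits.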
The experiments included in this paper use our own image database, created synthetically from two different environments. The results demonstrate that the method is able to estimate the relative height between two images robustly and linearly.
The method is invariant to rotations in the floor plane because the POC comparison is invariant to shifts of the Radon transform.
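This shift invariance is easy to check numerically. The sketch below (plain NumPy, not the code used in this work) applies phase-only correlation (POC) to a random array standing in for a Radon descriptor and to a circularly shifted copy of it: the correlation surface peaks at the shift, and the peak height stays close to 1 regardless of the shift amount.

```python
import numpy as np

def poc(a, b):
    """Phase-only correlation (POC) of two equally sized 2-D arrays."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12        # keep only the phase information
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
sino = rng.random((64, 180))              # stand-in for a Radon descriptor
shifted = np.roll(sino, 25, axis=1)       # column shift ~ rotation in the plane

surface = poc(shifted, sino)
peak = np.unravel_index(np.argmax(surface), surface.shape)
print(peak)                               # → (0, 25): the applied shift
```

Because a planar rotation of the robot only shifts the Radon transform along its angle axis, the POC peak location encodes the rotation while its height, which drives the comparison, is unaffected.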
The results of this work encourage us to continue this line of research. It will be interesting to repeat the experiments with real images, and also with images that present noise, occlusions or changes in lighting conditions. Furthermore, we think that the study of movements with 6 degrees of freedom will be an interesting subject to investigate.
ACKNOWLEDGEMENTS
This work has been supported by the Spanish govern-
ment through the project DPI2013-41557-P.