Figure 7: The predicted positions of the robot, represented by blue disks indicating the uncertainty.
Figure 8: The estimated positions of the robot, represented by red points, with the uncertainty ellipse drawn as a blue circle.
servations of the Extended Kalman Filter, the radius of the uncertainty ellipse dropped drastically, to about 4 cm. Overall, we can clearly see that the use of our visual feature points decreased the robot's localization error.
5 CONCLUSIONS
In summary, in this paper we have presented a new detector-descriptor for the extraction of salient visual features. It has good repeatability, so the robot can better manage the visual landmarks during SLAM. In future work, we aim to make the detector more robust to scale changes by convolving the image with a scale space before extracting stable corners. Furthermore, we plan to run more experiments on the hardware to test the algorithms and compare them against other SLAM approaches. It is also important to focus on extracting salient feature points with low-dimensional descriptors that capture the essence of the image, in order to increase the rate at which SLAM runs on the hardware.
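The scale-space idea mentioned as future work can be sketched as follows. This is a hypothetical illustration, not the paper's method: the image is blurred with Gaussians of increasing standard deviation (a separable convolution per axis), and a corner detector would then be run at each level, keeping only corners that persist across levels.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at 3*sigma, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_space(image, sigmas):
    """Return the image blurred at each scale in `sigmas`.

    Uses separable Gaussian convolution: filter rows, then columns.
    """
    levels = []
    for sigma in sigmas:
        k = gaussian_kernel(sigma)
        blurred = np.apply_along_axis(
            lambda row: np.convolve(row, k, mode="same"), 1, image)
        blurred = np.apply_along_axis(
            lambda col: np.convolve(col, k, mode="same"), 0, blurred)
        levels.append(blurred)
    return levels

# Illustrative input: a random test image
img = np.random.rand(64, 64)
pyramid = scale_space(img, sigmas=[1.0, 2.0, 4.0])
```

A corner extractor applied to each element of `pyramid` would see the same scene at different effective scales; corners detected consistently across several levels are the "stable corners" the future work refers to.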