directional constraint, the error generated in the curving scene is corrected. This may explain the high improvement rate seen in dataset 2.
6.2 Evaluation of Mapping
The detected line segments can be used to make digital maps. Figure 10 shows the resulting digital maps. For a quantitative analysis, we classify each generated line segment as an inlier or an outlier. A segment is an inlier if the perpendicular distances from its endpoints to a professionally surveyed line are within 100 mm. Table 2 shows the rate of inliers in each experiment. The results of dataset 1 become more accurate, while those of dataset 2 get worse. The reason is that dataset 2 contains more diagonal lines, which are incorrectly corrected toward parallel or vertical directions even though they are neither. Dataset 1 contains many vertical and parallel lines, so its rate of inliers increases in every experiment. Figure 11 shows whether each line segment in the digital map of line (S) in dataset 1 is an inlier or an outlier. Red lines indicate inliers and green lines indicate outliers. The tilted lines in Figure 11(a) are corrected by our method, so they change to red (inlier) lines in Figure 11(b).
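The inlier test above can be sketched as follows. The 100 mm threshold comes from the text; the function names and the choice to measure distance to the infinite reference line (rather than the reference segment) are illustrative assumptions.

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # |cross product| of (b - a) and (a - p), divided by the length of (b - a).
    return abs(dx * (ay - py) - dy * (ax - px)) / math.hypot(dx, dy)

def is_inlier(segment, reference_line, threshold_mm=100.0):
    """A generated segment counts as an inlier if both of its endpoints lie
    within threshold_mm of the reference (professionally surveyed) line."""
    a, b = reference_line
    return all(point_to_line_distance(p, a, b) <= threshold_mm for p in segment)
```

For example, with a reference line along the x-axis, a segment whose endpoints sit 50 mm and 80 mm off the axis would be an inlier, while one with an endpoint 150 mm away would not.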
7 CONCLUSIONS
In this paper, we propose a line-based SLAM method that considers the directional distribution of line features in an urban environment. We model the directional distribution of road markings as a mixture of Gaussian distributions and add a corresponding constraint term to the cost function of bundle adjustment. In practical experiments, we show that the accuracy of SLAM improves in all cases. With our method, single-camera SLAM is as accurate as multi-camera SLAM. In addition, we build digital maps from the detected line segments. Tilted lines are corrected by our method, but diagonal lines are corrected incorrectly in some cases. We will improve our method to handle such cases.
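The exact form of the directional constraint is defined earlier in the paper; as a rough illustrative sketch only (not the authors' implementation), a penalty on a line's direction under a Gaussian mixture could look like the following, where the dominant directions of 0° and 90° and the 5° spread are assumptions.

```python
import math

def direction_penalty(theta_deg, means_deg=(0.0, 90.0), sigma_deg=5.0):
    """Hypothetical directional-constraint term: the negative log of a
    Gaussian mixture over dominant road-marking directions, evaluated at
    the line's direction theta_deg. Mixture means and sigma are assumed."""
    mix = 0.0
    for mu in means_deg:
        # Wrap the angular difference into [-90, 90), since a line's
        # direction is ambiguous up to 180 degrees.
        d = (theta_deg - mu + 90.0) % 180.0 - 90.0
        mix += math.exp(-0.5 * (d / sigma_deg) ** 2)
    # Small epsilon keeps the log finite far from every mixture component.
    return -math.log(mix / len(means_deg) + 1e-12)
```

Under these assumptions the penalty is near zero for lines aligned with the dominant directions and large for diagonal lines, which is consistent with the observation that diagonal lines can be pulled incorrectly toward the parallel and vertical directions.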