and translation vector t = [119.4117, 127.9851, 0]. The
alignment root-mean-square error of the fused data is
approximately 3.2 mm. Many range data fusion experiments
have been performed with different objects in the
environment. Fig. 4(a) shows different kinds of objects
placed on the plane, and the corresponding accurately fused
3D terrain model is shown in Fig. 4(e). Next, a rectangular
aluminum log is placed on the plane, as shown in Fig. 4(b),
and its fused terrain model is shown in Fig. 4(f). Similarly,
a plywood board carrying 15 rectangular logs, together with
other objects, is placed on the plane, as shown in Fig. 4(c-d);
Fig. 4(g-h) shows the accurately fused 3D models of the
terrain. The resulting fused surfaces show that the proposed
method represents the real-world environment accurately,
realistically, and rapidly.
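The reported alignment error can be computed from corresponding point pairs after applying the estimated rigid transform. The sketch below is illustrative only (not the authors' code); the point sets are hypothetical, and only the translation vector comes from the result above.

```python
import numpy as np

def alignment_rmse(source, target, R, t):
    """Root-mean-square distance between transformed source points and
    their corresponding target points (units follow the input, e.g. mm)."""
    aligned = source @ R.T + t                    # apply rotation, then translation
    residuals = np.linalg.norm(aligned - target, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical example: identity rotation plus the translation reported above.
R = np.eye(3)
t = np.array([119.4117, 127.9851, 0.0])
source = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
target = source + t                               # perfectly corresponding points
print(alignment_rmse(source, target, R, t))       # → 0.0
```

With real scan pairs the residuals are nonzero, and their RMS is the figure quoted in the text.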
4 CONCLUSIONS
In this paper, we have presented a new approach for
range data fusion from two heterogeneous range scanners
(i.e., a laser range scanner and a Microsoft Kinect) that
integrates the merits of both scanners to generate an
accurate, realistic surface of the terrain. First, we
presented a new GMM framework using a convex relaxation
approach for segmenting RGB-D images with inhomogeneous
intensity. After transforming both range data sets to a
common reference frame, we applied the ICP algorithm to
align them. Aligning the two range data sets in a common
reference frame is much faster than applying the ICP
algorithm directly in their individual scanner coordinate
systems. In the fusion process, we selected the
coarser-detail regions from the Kinect and the finer-detail
regions from the laser scanner. The fused surface of the
terrain is reconstructed using the Delaunay triangulation
algorithm.
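A 2.5D terrain reconstruction of this kind can be sketched with SciPy's Delaunay triangulation: the fused points are triangulated in the ground plane and the vertices lifted by their heights. The point cloud below is synthetic and purely illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical fused point cloud: (x, y) positions with terrain heights z.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(200, 2))   # planar coordinates
z = np.sin(xy[:, 0] / 20.0) * 5.0             # synthetic height field

# 2.5D reconstruction: triangulate in the xy plane, lift vertices by z.
tri = Delaunay(xy)
faces = tri.simplices                         # (n_tri, 3) vertex indices
vertices = np.column_stack([xy, z])           # 3D mesh vertices

print(faces.shape[1])  # → 3 (each face is a triangle)
```

This planar (2.5D) triangulation is appropriate for terrain-like surfaces, where each (x, y) location carries a single height.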
In this way, we have generated a seamless integration
of the terrain surface from two overlapping range data
sets. The experimental results show accurate 3D terrain
models from the fused range data, with an alignment RMS
error of approximately 3-5 mm. The main contribution of
this paper is therefore a range data fusion approach that
eliminates the limitations of both range sensors and
generates accurate surface models of rough, highly
unstructured terrain.
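The alignment step described above can be sketched as a minimal point-to-point ICP loop: nearest-neighbour matching followed by an SVD (Kabsch) estimate of the rigid motion. This is an illustrative implementation under simplifying assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching plus an
    SVD (Kabsch) estimate of the rigid motion mapping source onto target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                # closest target point per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)   # cross-covariance of the match
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                     # apply the incremental motion
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Hypothetical check: recover a small pure translation between two copies
# of a regular grid (correspondences are then unambiguous).
g = np.arange(5.0)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 125 grid points
shift = np.array([0.1, 0.2, 0.05])
R, t = icp(grid - shift, grid)
# R ≈ identity, t ≈ shift
```

Bringing both scans into a common reference frame first, as the paper does, gives this loop a near-correct initial guess, which is why it converges faster than running ICP between the raw scanner coordinate systems.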
Range Data Fusion for Accurate Surface Generation from Heterogeneous Range Scanners