Authors:
P. Biasutti (1); A. Bugeau (2); J-F. Aujol (3) and M. Brédif (4)
Affiliations:
(1) Univ. Bordeaux, LaBRI, INP, CNRS, UMR 5800, F-33400 Talence, France; Univ. Bordeaux, IMB, INP, CNRS, UMR 5251, F-33400 Talence, France; Univ. Paris-Est, LASTIG GEOVIS, IGN, ENSG, F-94160 Saint-Mandé, France
(2) Univ. Bordeaux, LaBRI, INP, CNRS, UMR 5800, F-33400 Talence, France
(3) Univ. Bordeaux, IMB, INP, CNRS, UMR 5251, F-33400 Talence, France
(4) Univ. Paris-Est, LASTIG GEOVIS, IGN, ENSG, F-94160 Saint-Mandé, France
Keyword(s):
3D Point Cloud, Visibility, Visualization, LiDAR, Dataset, Benchmark.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Device Calibration, Characterization and Modeling; Geometry and Modeling; Image Enhancement and Restoration; Image Formation and Preprocessing; Image-Based Modeling; Multimodal and Multi-Sensor Models of Image Formation; Pattern Recognition; Robotics; Software Engineering
Abstract:
Estimating the visibility of points in a point cloud has many applications, such as visualization, surface reconstruction, and scene analysis through the fusion of LiDAR point clouds and images. However, most existing methods rely on strong assumptions about point cloud density that do not hold for LiDAR point clouds acquired by mobile mapping systems, resulting in poor visibility estimates. This work presents a novel approach for estimating the visibility of a point cloud from a given viewpoint. The method is fully automatic and makes no assumption about point cloud density: the visibility of each point is estimated from its screen-space neighborhood as seen from the given viewpoint. Our results show that our approach outperforms existing methods at estimating visibility on real-world data acquired with LiDAR scanners. We evaluate our approach by comparing its results against a new manually annotated dataset, which we make available online.
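To make the idea of screen-space visibility concrete, the following is a minimal sketch of one common baseline, not the paper's method: points are projected into the image plane with a pinhole model, a z-buffer keeps the nearest depth per pixel, and a point is kept as visible only if its depth is close to the minimum depth found in a small screen-space neighborhood of its projection. The function name, the `neighborhood` and `depth_margin` parameters, and the simple pinhole projection are all illustrative assumptions.

```python
import numpy as np

def estimate_visibility(points, width, height, focal,
                        neighborhood=3, depth_margin=0.1):
    """Sketch of screen-space visibility estimation (illustrative, not the
    paper's algorithm). `points` is an (N, 3) array in the camera frame
    with z > 0 toward the scene."""
    z = points[:, 2]
    # Pinhole projection to pixel coordinates (assumed intrinsics).
    u = np.round(focal * points[:, 0] / z + width / 2).astype(int)
    v = np.round(focal * points[:, 1] / z + height / 2).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    # Z-buffer: nearest depth observed at each pixel.
    zbuf = np.full((height, width), np.inf)
    for i in np.flatnonzero(inside):
        zbuf[v[i], u[i]] = min(zbuf[v[i], u[i]], z[i])

    # A point is visible if its depth is within `depth_margin` of the
    # minimum depth in a neighborhood x neighborhood pixel window.
    half = neighborhood // 2
    visible = np.zeros(len(points), dtype=bool)
    for i in np.flatnonzero(inside):
        win = zbuf[max(v[i] - half, 0):v[i] + half + 1,
                   max(u[i] - half, 0):u[i] + half + 1]
        visible[i] = z[i] <= win.min() + depth_margin
    return visible
```

For example, two points on the same line of sight at depths 1 and 5 project to the same pixel; the sketch marks the nearer one visible and the farther one occluded. Such fixed-window heuristics are exactly where density assumptions creep in, which is the failure mode the paper addresses for sparse mobile-mapping LiDAR.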