
Conference on Computer Vision and Pattern Recognition (CVPR).
Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Gall, J., and Stachniss, C. (2021). Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences. International Journal of Robotics Research.
Bradley, A., Li, H., Lark, R., and Dunn, S. (2016). BIM for infrastructure: An overall review and constructor perspective. Automation in Construction, 71:139–152.
Chang, A., Dai, A., Funkhouser, T., Halber, M., Nießner, M., Savva, M., Song, S., Zeng, A., and Zhang, Y. (2017). Matterport3D: Learning from RGB-D Data in Indoor Environments. In International Conference on 3D Vision (3DV).
Cignoni, P., Rocchini, C., and Scopigno, R. (1998). Metro: Measuring Error on Simplified Surfaces. Computer Graphics Forum, 17(2):167–174.
Dai, A., Chang, A. X., Savva, M., Halber, M., Funkhouser, T., and Nießner, M. (2017). ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Gaydon, C., Daab, M., and Roche, F. (2024). FRACTAL: An Ultra-Large-Scale Aerial Lidar Dataset for 3D Semantic Segmentation of Diverse Landscapes.
Girardeau-Montaut, D. (2006). Détection de changement sur des données géométriques tridimensionnelles. PhD thesis, Télécom Paris.
Guo, Y., Li, Y., Ren, D., Zhang, X., Li, J., Pu, L., Ma, C., Zhan, X., Guo, J., Wei, M., Zhang, Y., Yu, P., Yang, S., Ji, D., Ye, H., Sun, H., Liu, Y., Chen, Y., Zhu, J., and Liu, H. (2024). LiDAR-Net: A Real-scanned 3D Point Cloud Dataset for Indoor Scenes.
Hackel, T., Savinov, N., Ladicky, L., Wegner, J. D., Schindler, K., and Pollefeys, M. (2017). Semantic3D.net: A new large-scale point cloud classification benchmark. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-1/W1:91–98.
Hu, Q., Yang, B., Khalid, S., Xiao, W., Trigoni, N., and Markham, A. (2021). Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Jung, J., Stachniss, C., Ju, S., and Heo, J. (2018). Automated 3D volumetric reconstruction of multiple-room building interiors for as-built BIM. Advanced Engineering Informatics, 38:811–825.
Khoshelham, K., Díaz Vilariño, L., Peter, M., Kang, Z., and Acharya, D. (2017). The ISPRS benchmark on indoor modelling. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2/W7:367–372.
Kölle, M., Laupheimer, D., Schmohl, S., Haala, N., Rottensteiner, F., Wegner, J. D., and Ledoux, H. (2021). The Hessigheim 3D (H3D) benchmark on semantic segmentation of high-resolution 3D point clouds and textured meshes from UAV LiDAR and Multi-View-Stereo. ISPRS Open Journal of Photogrammetry and Remote Sensing, 1:100001.
Li, X., Li, C., Tong, Z., Lim, A., Yuan, J., Wu, Y., Tang, J., and Huang, R. (2020). Campus3D: A photogrammetry point cloud benchmark for hierarchical understanding of outdoor scene. In Proceedings of the 28th ACM International Conference on Multimedia.
Munoz, D., Bagnell, J. A., Vandapel, N., and Hebert, M. (2009). Contextual classification with functional Max-Margin Markov Networks. In IEEE Conference on Computer Vision and Pattern Recognition.
Pocobelli, D. P., Boehm, J., Bryan, P., Still, J., and Grau-Bové, J. (2018). BIM for heritage science: a review. Heritage Science, 6(1):1–15.
Qian, G., Li, Y., Peng, H., Mai, J., Hammoud, H., Elhoseiny, M., and Ghanem, B. (2022). PointNeXt: Revisiting PointNet++ with improved training and scaling strategies. In Advances in Neural Information Processing Systems, volume 35, pages 23192–23204.
Rottensteiner, F., Sohn, G., Jung, J., Gerke, M., Baillard, C., Bénitez, S., and Breitkopf, U. (2012). The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, I-3.
Roynard, X., Deschaud, J.-E., and Goulette, F. (2018). Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification. International Journal of Robotics Research.
Rozenberszki, D., Litany, O., and Dai, A. (2022). Language-grounded indoor 3D semantic segmentation in the wild. In Computer Vision – ECCV 2022, pages 125–141.
Serna, A., Marcotegui, B., Goulette, F., and Deschaud, J.-E. (2014). Paris-rue-Madame Database - A 3D Mobile Laser Scanner Dataset for Benchmarking Urban Detection, Segmentation and Classification Methods. In 3rd International Conference on Pattern Recognition Applications and Methods.
Tan, W., Qin, N., Ma, L., Li, Y., Du, J., Cai, G., Yang, K., and Li, J. (2020). Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
Vallet, B., Brédif, M., Serna, A., Marcotegui, B., and Paparoditis, N. (2015). TerraMobilita/iQmulus urban point cloud analysis benchmark. Computers & Graphics.
Varney, N., Asari, V. K., and Graehling, Q. (2020). DALES: A Large-scale Aerial LiDAR Data Set for Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
Wang, J., Sun, W., Shou, W., Wang, X., Wu, C., Chong, H.-Y., Liu, Y., and Sun, C. (2015). Integrating BIM and LiDAR for real-time construction quality control. Journal of Intelligent & Robotic Systems, 79:417–432.
Wang, P.-S., Liu, Y., Guo, Y.-X., Sun, C.-Y., and Tong, X. (2017). O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics, 36(4).