which is a major challenge for both machine learning and deep learning methods. The proposed pipeline uses a modification of Deep Neuro-Evolution as its learning module and FPS for intelligent sampling. In addition, a novel fitness function is proposed to evaluate the quality of each individual and reveal the elite solution by applying random mutations to the network parameters for a pre-determined number of generations. To further improve the pipeline's efficiency, we propose to use only 5% of the sampled points for the learning phase, while the neighborhoods of the remaining 95% of the points are used to estimate the pipeline's generalization performance.
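To make the evolutionary loop concrete, the following Python sketch illustrates the general mutate-evaluate-select idea rather than the exact pipeline: the TinyRadiusNet network, the population of ten mutants per generation, the mutation scale, and the use of mean covariance-eigenvalue entropy over a 5% FPS sample as the fitness are all illustrative assumptions, not the architecture, hyper-parameters, or fitness function of the proposed method.

```python
import numpy as np

def eigen_entropy(neigh):
    """Shannon entropy of the normalised covariance eigenvalues of a neighbourhood."""
    cov = np.cov(neigh.T)
    evals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = evals / evals.sum()
    return float(-(p * np.log(p)).sum())

def farthest_point_sampling(points, m):
    """Plain FPS: greedily add the point farthest from the current sample set."""
    idx = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(points - points[idx[-1]], axis=1))
    return np.array(idx)

class TinyRadiusNet:
    """Illustrative one-hidden-layer MLP mapping a 3-D point to a spherical search radius."""
    def __init__(self, rng, hidden=8):
        self.w1 = rng.normal(0.0, 0.5, (3, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, 1))

    def radius(self, p):
        h = np.tanh(p @ self.w1)
        return 0.05 + 0.5 / (1.0 + np.exp(-(h @ self.w2)[0]))   # radius in (0.05, 0.55)

    def mutated(self, rng, sigma=0.05):
        """Return a copy with Gaussian noise added to every weight (random mutation)."""
        child = TinyRadiusNet(rng)
        child.w1 = self.w1 + rng.normal(0.0, sigma, self.w1.shape)
        child.w2 = self.w2 + rng.normal(0.0, sigma, self.w2.shape)
        return child

def fitness(net, cloud, sample_idx):
    """Lower mean eigenvalue entropy over the predicted spherical neighbourhoods is better."""
    total = 0.0
    for i in sample_idx:
        r = net.radius(cloud[i])
        neigh = cloud[np.linalg.norm(cloud - cloud[i], axis=1) < r]
        if len(neigh) >= 4:                       # enough points for a stable covariance
            total += eigen_entropy(neigh)
        else:
            total += np.log(3.0)                  # penalise degenerate neighbourhoods (max entropy)
    return total / len(sample_idx)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(2000, 3))                # stand-in point cloud
train_idx = farthest_point_sampling(cloud, int(0.05 * len(cloud)))   # 5% used for learning

elite = TinyRadiusNet(rng)
best = fitness(elite, cloud, train_idx)
for gen in range(20):                             # pre-determined number of generations
    children = [elite.mutated(rng) for _ in range(10)]   # population of random mutants
    for child in children:
        f = fitness(child, cloud, train_idx)
        if f < best:                              # keep the elite (lowest mean entropy)
            elite, best = child, f
    print(f"generation {gen:02d}  best mean eigen-entropy {best:.4f}")
```

Keeping only the single best individual and regenerating mutants from it each generation is the simplest elitist form of neuro-evolution; it is shown here only to mirror the structure described above, not the specific modification used in the pipeline.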
Compared to the baseline, the pipeline reduced entropy values regardless of the neighborhood type. Furthermore, the pipeline has a positive impact on the normal estimation problem, with spherical neighborhoods that optimize eigenvalue entropy delivering the best results.