Figure 9: Upsampling results for a sparse point cloud (input = 625 points, output = 25,000 points). Panels: i) ours, ii) ground truth, iii) our reconstruction, iv) ground-truth reconstruction.
In future work, we will extend the network to handle colored point clouds and estimate a color for each upsampled point. Another direction is shape completion, i.e., filling in holes in the input point cloud that often result from real-world scans.
ACKNOWLEDGEMENTS
The research underlying these results was partially funded by the Free State of Thuringia under grant number 2015 FE 9108 and co-financed by the European Union through the European Regional Development Fund (ERDF).
VISAPP 2021 - 16th International Conference on Computer Vision Theory and Applications