
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T.,
Ramamoorthi, R., and Ng, R. (2021). Nerf: Repre-
senting scenes as neural radiance fields for view syn-
thesis. Communications of the ACM, 65(1):99–106.
Mori, Y., Fukushima, N., Yendo, T., Fujii, T., and Tanimoto,
M. (2009). View generation with 3d warping using
depth information for ftv. Signal Processing: Image
Communication, 24(1-2):65–72.
Müller, T., Evans, A., Schied, C., and Keller, A. (2022).
Instant neural graphics primitives with a multiresolu-
tion hash encoding. ACM Transactions on Graphics
(ToG), 41(4):1–15.
Müller, S. and Kranzlmüller, D. (2021). Dynamic Sensor
Matching for Parallel Point Cloud Data Acquisition.
In 29th International Conference in Central Europe on
Computer Graphics (WSCG), pages 21–30.
Müller, S. and Kranzlmüller, D. (2022). Dynamic Sensor
Matching based on Geomagnetic Inertial Navigation.
In 30th International Conference in Central Europe on
Computer Graphics (WSCG).
Munkberg, J., Hasselgren, J., Shen, T., Gao, J., Chen, W.,
Evans, A., Müller, T., and Fidler, S. (2023). Extract-
ing triangular 3d models, materials, and lighting from
images.
Nazeri, K., Ng, E., Joseph, T., Qureshi, F. Z., and Ebrahimi,
M. (2019). Edgeconnect: Generative image inpainting
with adversarial edge learning.
Nealen, A., Igarashi, T., Sorkine, O., and Alexa, M. (2006).
Laplacian mesh optimization. In Proceedings of the
4th international conference on Computer graphics
and interactive techniques in Australasia and South-
east Asia, pages 381–389.
Park, J. J., Florence, P., Straub, J., Newcombe, R., and
Lovegrove, S. (2019). Deepsdf: Learning continu-
ous signed distance functions for shape representa-
tion. In Proceedings of the IEEE/CVF conference on
computer vision and pattern recognition, pages 165–
174.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J.,
Chanan, G., Killeen, T., Lin, Z., Gimelshein, N.,
Antiga, L., et al. (2019). Pytorch: An imperative style,
high-performance deep learning library. Advances in
neural information processing systems, 32.
Pauly, M., Mitra, N., Wallner, J., Pottmann, H., and Guibas,
L. (2008). Discovering structural regularity in 3d ge-
ometry. ACM transactions on graphics, 27.
Pauly, M., Mitra, N. J., Giesen, J., Gross, M. H., and
Guibas, L. J. (2005). Example-based 3d scan comple-
tion. In Symposium on geometry processing, pages
23–32.
Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M.,
and Geiger, A. (2020). Convolutional occupancy net-
works. In Computer Vision–ECCV 2020: 16th Euro-
pean Conference, Glasgow, UK, August 23–28, 2020,
Proceedings, Part III 16, pages 523–540. Springer.
Polygons, P. (2020). Downtown west modular pack.
https://www.unrealengine.com/marketplace/en-US/product/6bb93c7515e148a1a0a0ec263db67d5b.
Reiser, C., Peng, S., Liao, Y., and Geiger, A. (2021). Kilo-
nerf: Speeding up neural radiance fields with thou-
sands of tiny mlps. In Proceedings of the IEEE/CVF
International Conference on Computer Vision, pages
14335–14345.
Rock, J., Gupta, T., Thorsen, J., Gwak, J., Shin, D., and
Hoiem, D. (2015). Completing 3d object shape from
one depth image. In Proceedings of the IEEE con-
ference on computer vision and pattern recognition,
pages 2484–2493.
Rosinol, A., Leonard, J. J., and Carlone, L. (2023). Nerf-
slam: Real-time dense monocular slam with neural ra-
diance fields. In 2023 IEEE/RSJ International Confer-
ence on Intelligent Robots and Systems (IROS), pages
3437–3444. IEEE.
Ross, A. and Doshi-Velez, F. (2018). Improving the ad-
versarial robustness and interpretability of deep neu-
ral networks by regularizing their input gradients. In
Proceedings of the AAAI conference on artificial intel-
ligence, volume 32, pages 1660–1669.
scikit-image.org (2022). scikit-image: Image processing
in Python. https://scikit-image.org/.
Shen, C.-H., Fu, H., Chen, K., and Hu, S.-M. (2012). Struc-
ture recovery by part assembly. ACM Transactions on
Graphics (TOG), 31(6):1–11.
Sipiran, I., Gregor, R., and Schreck, T. (2014). Approximate
symmetry detection in partial 3d meshes. In Computer
Graphics Forum, volume 33, pages 131–140. Wiley
Online Library.
Song, S., Yu, F., Zeng, A., Chang, A. X., Savva, M., and
Funkhouser, T. (2017). Semantic scene completion
from a single depth image. In Proceedings of the IEEE
conference on computer vision and pattern recogni-
tion, pages 1746–1754.
Sorkine, O. and Cohen-Or, D. (2004). Least-squares
meshes. In Proceedings Shape Modeling Applica-
tions, 2004., pages 191–199. IEEE.
Sung, M., Kim, V. G., Angst, R., and Guibas, L. (2015).
Data-driven structural priors for shape completion.
ACM Transactions on Graphics (TOG), 34(6):1–11.
Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A.,
Ashukha, A., Silvestrov, A., Kong, N., Goka, H., Park,
K., and Lempitsky, V. (2022). Resolution-robust large
mask inpainting with fourier convolutions. In Pro-
ceedings of the IEEE/CVF winter conference on ap-
plications of computer vision, pages 2149–2159.
Tatarchenko, M., Dosovitskiy, A., and Brox, T. (2017). Oc-
tree generating networks: Efficient convolutional ar-
chitectures for high-resolution 3d outputs. In Proceed-
ings of the IEEE international conference on com-
puter vision, pages 2088–2096.
Wang, L., Jin, H., Yang, R., and Gong, M. (2008). Stereo-
scopic inpainting: Joint color and depth completion
from stereo images. In 2008 IEEE Conference on
Computer Vision and Pattern Recognition, pages 1–8.
IEEE.
Wang, N., Zhang, Y., Li, Z., Fu, Y., Liu, W., and Jiang, Y.-
G. (2018a). Pixel2mesh: Generating 3d mesh models
from single rgb images. In Proceedings of the Euro-
pean conference on computer vision (ECCV), pages
52–67.
Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., and
Catanzaro, B. (2018b). High-resolution image synthe-
sis and semantic manipulation with conditional gans.
D-LaMa: Depth Inpainting of Perspective-Occluded Environments