
to tackle these issues and to integrate our selection procedure with other neural reconstruction methods.
REFERENCES
Cao, M., Zheng, L., Jia, W., Lu, H., and Liu, X. (2021). Accurate 3-D reconstruction under IoT environments and its applications to augmented reality. IEEE Transactions on Industrial Informatics, 17(3):2090–2100.
Chen, A., Xu, Z., Geiger, A., Yu, J., and Su, H. (2022). TensoRF: Tensorial radiance fields. In European Conference on Computer Vision (ECCV).
Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. (2022). Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Furukawa, Y., Curless, B., Seitz, S. M., and Szeliski, R. (2010). Towards internet-scale multi-view stereo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1434–1441. IEEE.
Jain, A., Tancik, M., and Abbeel, P. (2021). Putting NeRF on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5885–5894.
Kim, M., Seo, S., and Han, B. (2022). InfoNeRF: Ray entropy minimization for few-shot neural volume rendering. In CVPR.
Ladikos, A., Ilic, S., and Navab, N. (2009). Spectral camera clustering. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pages 2080–2086. IEEE.
Liu, L., Gu, J., Lin, K. Z., Chua, T.-S., and Theobalt, C. (2020). Neural sparse voxel fields. NeurIPS.
Mauro, M., Riemenschneider, H., Signoroni, A., Leonardi, R., and Van Gool, L. (2014). An integer linear programming model for view selection on overlapping camera clusters. In 2014 2nd International Conference on 3D Vision, volume 1, pages 464–471. IEEE.
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405–421. Springer.
Müller, T., Evans, A., Schied, C., and Keller, A. (2022). Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG), 41(4):1–15.
Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S. M., Geiger, A., and Radwan, N. (2022). RegNeRF: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Orsingher, M., Zani, P., Medici, P., and Bertozzi, M. (2022a). Efficient view clustering and selection for city-scale 3D reconstruction. In Image Analysis and Processing – ICIAP 2022: 21st International Conference, Lecce, Italy, May 23–27, 2022, Proceedings, Part II, pages 114–124. Springer.
Orsingher, M., Zani, P., Medici, P., and Bertozzi, M. (2022b). Revisiting PatchMatch multi-view stereo for urban 3D reconstruction. In 2022 IEEE Intelligent Vehicles Symposium (IV), pages 190–196. IEEE.
Pan, X., Lai, Z., Song, S., and Huang, G. (2022). ActiveNeRF: Learning where to see with uncertainty estimation. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pages 230–246. Springer.
Perron, L. and Furnon, V. (2022). OR-Tools.
Ramasinghe, S., MacDonald, L. E., and Lucey, S. (2022). On the frequency-bias of coordinate-MLPs. In Advances in Neural Information Processing Systems.
Reizenstein, J., Shapovalov, R., Henzler, P., Sbordone, L., Labatut, P., and Novotny, D. (2021). Common Objects in 3D: Large-scale learning and evaluation of real-life 3D category reconstruction. In International Conference on Computer Vision.
Roessle, B., Barron, J. T., Mildenhall, B., Srinivasan, P. P., and Nießner, M. (2022). Dense depth priors for neural radiance fields from sparse input views. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Schönberger, J. L. and Frahm, J.-M. (2016). Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR).
Seo, S., Han, D., Chang, Y., and Kwak, N. (2023). MixNeRF: Modeling a ray with mixture density for novel view synthesis from sparse inputs. arXiv preprint.
Sucar, E., Liu, S., Ortiz, J., and Davison, A. (2021). iMAP: Implicit mapping and positioning in real-time. In Proceedings of the International Conference on Computer Vision (ICCV).
Van der Merwe, M., Lu, Q., Sundaralingam, B., Matak, M., and Hermans, T. (2020). Learning continuous 3D reconstructions for geometrically aware grasping. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 11516–11522.
Wang, Y., James, S., Stathopoulou, E. K., Beltrán-González, C., Konishi, Y., and Del Bue, A. (2019). Autonomous 3-D reconstruction, mapping, and exploration of indoor environments with a robotic arm. IEEE Robotics and Automation Letters, 4(4):3340–3347.
Wynn, J. and Turmukhambetov, D. (2023). DiffusioNeRF: Regularizing neural radiance fields with denoising diffusion models. arXiv preprint.
Yang, J., Pavone, M., and Wang, Y. (2023). FreeNeRF: Improving few-shot neural rendering with free frequency regularization. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Yeh, Y.-J. and Lin, H.-Y. (2018). 3D reconstruction and visual SLAM of indoor scenes for augmented reality application. In 2018 IEEE 14th International Conference on Control and Automation (ICCA), pages 94–99.
Yen-Chen, L. (2020). nerf-pytorch. https://github.com/yenchenlin/nerf-pytorch/.
Informative Rays Selection for Few-Shot Neural Radiance Fields