
provide both better reconstruction quality and interpretable insights into the reconstruction process.
Supplementary Material. For additional plots and scene-wise comparisons, please refer to the following GitHub repository: https://github.com/akashjorss/Con3DMax.