
6 CONCLUSION
Gaussian Splatting has demonstrated its ability to surpass state-of-the-art reconstruction methods in quality; however, challenges such as under-reconstruction, artifacts, and the omission of important details, particularly in background regions, still leave room for improvement. These limitations often arise from imprecise densification. To address them, we build on recent advances in Adaptive Density Control for 3DGS and propose several novel improvements: a correction mechanism for the scene extent, an exponentially ascending gradient threshold, and significance-aware pruning.
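To give a concrete sense of how one of these components could be realized, the following minimal Python sketch illustrates an exponentially ascending gradient threshold for densification; the function name, iteration range, and threshold values are illustrative assumptions and not the exact settings used in our implementation.

def densify_grad_threshold(iteration,
                           start_iter=500,    # assumed start of densification
                           end_iter=15000,    # assumed end of densification
                           tau_start=2e-4,    # assumed initial threshold
                           tau_end=2e-3):     # assumed final threshold
    """Exponentially interpolate the densification gradient threshold
    from tau_start to tau_end, so that cloning and splitting become
    more selective as training progresses."""
    # Normalized progress through the densification phase, clamped to [0, 1].
    t = min(max((iteration - start_iter) / (end_iter - start_iter), 0.0), 1.0)
    # Log-linear (exponential) interpolation between the two thresholds.
    return tau_start * (tau_end / tau_start) ** t

In an existing 3DGS training loop, such a schedule would simply replace the constant gradient threshold consulted when deciding which Gaussians to clone or split.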
Our comprehensive evaluation demonstrates that combining these techniques effectively addresses these challenges, resulting in improved reconstruction quality while maintaining a manageable number of Gaussian primitives. Although some of the modifications bring only minor improvements, all of the components are straightforward to integrate into existing 3DGS frameworks, providing a practical and efficient enhancement over previous methods.
ACKNOWLEDGEMENTS
This work has partly been funded by the German Research Foundation (project 3DIL, grant no. 502864329), the German Federal Ministry of Education and Research (project VoluProf, grant no. 16SV8705), and the European Commission (Horizon Europe project Luminous, grant no. 101135724).