
the learning of NeRF is accelerated. The combination
of these two sampling methods makes the learning of
NeRF more stable.
In our experiments, we compare each sampling
method against conventional NeRF and against
InfoNeRF, which targets training from a small
amount of data. Edge sampling achieves stable
learning on all of our training data, and SE sampling
additionally accelerates learning. With a small
amount of data, Edge sampling not only stabilizes
learning but also tends to improve PSNR. These
results indicate that edges are very important for
NeRF training.
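To make the idea concrete, the selection step of Edge sampling can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: a simple gradient-magnitude threshold stands in for the Canny detector used in the paper, and the names `edge_sampling`, `max_dist`, and `thresh` are illustrative. `max_dist` plays the role of the distance hyperparameter discussed in the limitations below.

```python
import numpy as np

def edge_sampling(image, n_rays, max_dist=2, thresh=0.1, rng=None):
    """Sample pixel coordinates near image edges (illustrative sketch).

    Pixels within `max_dist` of a detected edge are candidates;
    `n_rays` of them are drawn uniformly at random.
    """
    rng = np.random.default_rng() if rng is None else rng
    gray = image.mean(axis=-1)          # crude grayscale conversion
    # Gradient magnitude as a stand-in for the Canny edge detector.
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > thresh
    # Dilate the edge mask so pixels up to `max_dist` away qualify.
    mask = edges.copy()
    for _ in range(max_dist):
        m = mask.copy()
        mask[1:, :] |= m[:-1, :]
        mask[:-1, :] |= m[1:, :]
        mask[:, 1:] |= m[:, :-1]
        mask[:, :-1] |= m[:, 1:]
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=min(n_rays, len(ys)), replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)
```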
Our proposed method has two limitations. First,
the hyperparameter controlling how far from an edge
Edge sampling is applied must be set appropriately;
pixel sampling methods that can train NeRF on any
data without tuning this hyperparameter are a topic
for future research. Second, SE sampling requires a
large amount of memory because the squared error
must be computed for twice as many pixels as are
actually used for training. We therefore need pixel
sampling methods that make more efficient use of the
pixels evaluated during training.
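The memory overhead of SE sampling comes from this 2x evaluation: twice as many candidate pixels are rendered and scored as are kept for the gradient step. A minimal sketch of the selection step, under that assumption and with illustrative names (`se_sampling` is not the authors' API):

```python
import numpy as np

def se_sampling(pred, target, coords, n_rays):
    """Keep the n_rays candidate pixels with the largest squared error.

    `pred` and `target` hold the rendered and ground-truth RGB values
    of the candidate pixels (typically 2 * n_rays of them), and
    `coords` their pixel coordinates.
    """
    # Per-pixel squared error, summed over the RGB channels.
    err = ((pred - target) ** 2).sum(axis=-1)
    top = np.argsort(err)[::-1][:n_rays]  # hardest pixels first
    return coords[top]
```

In use, one would render `2 * n_rays` candidate pixels, then train only on the `n_rays` returned here, which is where the extra memory is spent.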
REFERENCES
Ali, M. and Clausi, D. (2001). Using the canny edge de-
tector for feature extraction and enhancement of re-
mote sensing images. In IGARSS 2001. Scanning the
Present and Resolving the Future. Proceedings. IEEE
2001 International Geoscience and Remote Sensing
Symposium (Cat. No. 01CH37217), volume 5, pages
2298–2300. IEEE.
Barron, J. T., Mildenhall, B., Verbin, D., Srinivasan, P. P.,
and Hedman, P. (2022). Mip-nerf 360: Unbounded
anti-aliased neural radiance fields. In Proceedings of
the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pages 5470–5479.
Canny, J. (1986). A computational approach to edge de-
tection. IEEE Transactions on Pattern Analysis and
Machine Intelligence, (6):679–698.
Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. (2022).
Depth-supervised nerf: Fewer views and faster train-
ing for free. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition,
pages 12882–12891.
Du, Y., Zhang, Y., Yu, H.-X., Tenenbaum, J. B., and Wu,
J. (2021). Neural radiance flow for 4d view synthe-
sis and video processing. In 2021 IEEE/CVF Interna-
tional Conference on Computer Vision (ICCV), pages
14304–14314. IEEE Computer Society.
Fukuda, K., Kurita, T., and Aizawa, H. (2023). Neural
radiance fields with regularizer based on differences
of neighboring pixels. In 2023 International Joint
Conference on Neural Networks (IJCNN), pages 1–7.
IEEE.
Gai, Z., Liu, Z., Tan, M., Ding, J., Yu, J., Tong, M., and
Yuan, J. (2023). Egra-nerf: Edge-guided ray alloca-
tion for neural radiance fields. Image and Vision Com-
puting, 134:104670.
Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep sparse
rectifier neural networks. In Proceedings of the Four-
teenth International Conference on Artificial Intelli-
gence and Statistics, pages 315–323. JMLR Workshop
and Conference Proceedings.
Jang, W. and Agapito, L. (2021). Codenerf: Disentangled
neural radiance fields for object categories. In Pro-
ceedings of the IEEE/CVF International Conference
on Computer Vision, pages 12949–12958.
Kim, M., Seo, S., and Han, B. (2022). Infonerf: Ray en-
tropy minimization for few-shot neural volume ren-
dering. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pages
12912–12921.
Lin, C.-H., Ma, W.-C., Torralba, A., and Lucey, S. (2021).
Barf: Bundle-adjusting neural radiance fields. In Pro-
ceedings of the IEEE/CVF International Conference
on Computer Vision, pages 5741–5751.
Martin-Brualla, R., Radwan, N., Sajjadi, M. S., Barron,
J. T., Dosovitskiy, A., and Duckworth, D. (2021). Nerf
in the wild: Neural radiance fields for unconstrained
photo collections. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recogni-
tion, pages 7210–7219.
Meng, Q., Chen, A., Luo, H., Wu, M., Su, H., Xu, L., He,
X., and Yu, J. (2021). Gnerf: Gan-based neural ra-
diance field without posed camera. In Proceedings of
the IEEE/CVF International Conference on Computer
Vision, pages 6351–6361.
Metzer, G., Richardson, E., Patashnik, O., Giryes, R., and
Cohen-Or, D. (2023). Latent-nerf for shape-guided
generation of 3d shapes and textures. In Proceedings
of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pages 12663–12673.
Mildenhall, B., Srinivasan, P., Tancik, M., Barron, J., Ra-
mamoorthi, R., and Ng, R. (2020). Nerf: Represent-
ing scenes as neural radiance fields for view synthesis.
In European Conference on Computer Vision.
Nguyen, T., Chen, Z., and Lee, J. (2020). Dataset meta-
learning from kernel ridge-regression. arXiv preprint
arXiv:2011.00050.
Nguyen, T., Novak, R., Xiao, L., and Lee, J. (2021).
Dataset distillation with infinitely wide convolutional
networks. Advances in Neural Information Processing
Systems, 34:5186–5198.
Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S.,
Geiger, A., and Radwan, N. (2022). Regnerf: Regu-
larizing neural radiance fields for view synthesis from
sparse inputs. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition,
pages 5480–5490.
Oechsle, M., Peng, S., and Geiger, A. (2021). Unisurf:
Unifying neural implicit surfaces and radiance fields
for multi-view reconstruction. In Proceedings of the
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications