
Probabilistic NeRF for 3D Shape Recovery in Scattered Medium
Yoshiki Ono, Fumihiko Sakaue and Jun Sato
Nagoya Institute of Technology, Nagoya, Japan
{y-ono@cv., sakaue@, junsato@}nitech.ac.jp
Keywords:
NeRF (Neural Radiance Fields), Ray Tracing, Scattering Medium, Stochastic Gradient Descent.
Abstract:
This research proposes a method for analyzing scene information, including the characteristics of the medium, by representing a space containing both objects and scattering media such as fog or smoke with the NeRF (Neural Radiance Fields) (Mildenhall et al., 2020) representation of the light ray field. In this study, we focus on the fact that the behavior of rays inside a scattering medium can be expressed probabilistically, and we show a method for rendering an image that varies probabilistically from only a single ray, rather than by tracing the entire set of scattered rays. By combining this rendering with a neural scene representation optimized by stochastic gradient descent, we show that scene information can be analyzed without generating images that directly render the light scattering.
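As a minimal sketch of this idea (the notation below is ours, not taken from the paper), the pixel value produced by full scattering can be viewed as the expectation of a single-ray rendering over the random scattering events $\xi$, so stochastic gradient descent can update the network parameters $\theta$ from one sampled ray at a time:

$$C(\mathbf{r}) \;=\; \mathbb{E}_{\xi}\!\left[\hat{C}_{\xi}(\mathbf{r};\theta)\right], \qquad \theta \;\leftarrow\; \theta - \eta\,\nabla_{\theta}\,\big\| I(\mathbf{r}) - \hat{C}_{\xi}(\mathbf{r};\theta) \big\|^{2},$$

where $I(\mathbf{r})$ is the observed pixel value along ray $\mathbf{r}$ and $\hat{C}_{\xi}$ is the value rendered from a single probabilistically sampled ray.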
1 INTRODUCTION
In recent years, an increasing number of automobiles have been equipped with cameras and sensors to acquire information about the surrounding environment. By analyzing the information acquired by these sensors, safer driving can be achieved. Such methods are usually designed for use in a clear environment. However, if a scattering medium such as fog or smoke is present in the scene, the observed image is affected by it and becomes blurred (Scadron et al., 1964; Tian et al., 2017). It is therefore difficult to obtain appropriate results when the processing assumes a clear image. In particular, methods that recover three-dimensional information, such as scene shape reconstruction, must cope with the complex ray behavior caused by the medium, which makes it difficult to recover appropriate information.
In order to eliminate the influence of such scattering media on cameras and sensors, and to accurately acquire information about the surrounding environment, it is necessary to analyze the optical phenomenon of light scattering that occurs when a ray of light enters the scattering medium. However, inside the scattering medium, a ray changes its behavior depending on whether or not it collides with small particles. As a result, a very complex ray space is formed inside the medium, which is difficult to analyze directly. Various methods have been proposed to solve this problem (Mukaigawa et al., 2010; K.Nayar et al., 2006; Narasimhan et al., 2006; Naik et al., 2015; Kitano et al., 2017; L.G. and J.L., 1941; Satat et al., 2018).
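To make this probabilistic behavior concrete, the following Python sketch (illustrative only, not the implementation used in this paper; the homogeneous extinction coefficient sigma_t and the isotropic phase function are assumptions made for the example) samples the free path to the next particle collision and draws a new scattered direction after each collision:

import numpy as np

def trace_ray_in_medium(origin, direction, sigma_t, num_bounces, rng):
    """Sample one probabilistic ray path through a homogeneous scattering medium.

    sigma_t is the extinction coefficient (an illustrative assumption); object
    surfaces and medium boundaries are ignored to keep the sketch short.
    """
    points = []
    for _ in range(num_bounces):
        # Free-path sampling: the distance to the next collision follows an
        # exponential distribution with mean 1 / sigma_t (Beer-Lambert law).
        t = rng.exponential(1.0 / sigma_t)
        origin = origin + t * direction
        points.append(origin.copy())
        # Isotropic phase function: draw the scattered direction uniformly
        # on the unit sphere (a simplifying assumption for this sketch).
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
    return points

rng = np.random.default_rng(0)
path = trace_ray_in_medium(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                           sigma_t=0.5, num_bounces=4, rng=rng)

Averaging over many such random paths would reproduce the full scattered image; the approach summarized in the abstract instead renders from only a single such ray at a time.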
Nayar et al. (K.Nayar et al., 2006) proposed a method for separating the observed light into a direct component reflected on the object and a global component scattered by the scattering medium, using a technique called high-frequency pattern projection. Although this method can be applied to media of various densities, it requires multiple projections of the modulation pattern and multiple images of the scene to separate the light rays. Therefore, it is unsuitable for dynamic scenes.
Narasimhan et al. (Narasimhan et al., 2006) propose a method to estimate the characteristics of the scattering medium itself, but it requires that the scattering medium be measured on its own. Satat et al. (Satat et al., 2018) propose a method that removes the effect of back-scattering from sensor measurements taken in a scattering medium to obtain clear information. This method makes it possible to obtain information on the surrounding environment from sensors mounted on automobiles and other vehicles in foggy scenes. However, its applicability is limited, and it is difficult to apply to general scenes.
In recent years, methods for analyzing scenes using deep learning have been proposed, but they require a huge amount of training data to accurately analyze complex scenes. To solve this problem, we propose a