
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV.
Ngo, T.-T., Nagahara, H., Nishino, K., Taniguchi, R.-i., and Yagi, Y. (2019). Reflectance and shape estimation with a light field camera under natural illumination. IJCV, 127(11–12):1707–1722.
Penner, E. and Zhang, L. (2017). Soft 3D reconstruction for view synthesis. ACM Transactions on Graphics (TOG), 36(6):1–11.
Pozo, A. P., Toksvig, M., Schrager, T. F., Hsu, J., Mathur, U., Sorkine-Hornung, A., Szeliski, R., and Cabral, B. (2019). An integrated 6DoF video camera and system design. ACM Transactions on Graphics (TOG), 38(6):1–16.
Riegler, G. and Koltun, V. (2020). Free view synthesis. In Proc. ECCV, pages 623–640. Springer.
Riegler, G. and Koltun, V. (2021). Stable view synthesis. In Proc. IEEE/CVF CVPR, pages 12216–12225.
Schönberger, J. L. and Frahm, J.-M. (2016). Structure-from-motion revisited. In Proc. IEEE/CVF CVPR.
Srinivasan, P. P., Deng, B., Zhang, X., Tancik, M., Mildenhall, B., and Barron, J. T. (2021). NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In Proc. IEEE/CVF CVPR, pages 7495–7504.
Srinivasan, P. P., Tucker, R., Barron, J. T., Ramamoorthi, R., Ng, R., and Snavely, N. (2019). Pushing the boundaries of view extrapolation with multiplane images. In Proc. IEEE/CVF CVPR, pages 175–184.
Sulc, A., Johannsen, O., and Goldluecke, B. (2018). Inverse lightfield rendering for shape, reflection and natural illumination. In Energy Minimization Methods in Computer Vision and Pattern Recognition: 11th International Conference, EMMCVPR 2017, Venice, Italy, October 30–November 1, 2017, Revised Selected Papers 11, pages 372–388. Springer.
Tao, M. W., Hadap, S., Malik, J., and Ramamoorthi, R. (2013). Depth from combining defocus and correspondence using light-field cameras. In Proc. IEEE ICCV, pages 673–680.
Tao, M. W., Su, J.-C., Wang, T.-C., Malik, J., and Ramamoorthi, R. (2015). Depth estimation and specular removal for glossy surfaces using point and line consistency with light-field cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6):1155–1169.
Teed, Z. and Deng, J. (2020). RAFT: Recurrent all-pairs field transforms for optical flow. In Proc. ECCV.
Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., and Tumblin, J. (2007). Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics (TOG), 26(3):69.
Wang, T.-C., Chandraker, M., Efros, A. A., and Ramamoorthi, R. (2016). SVBRDF-invariant shape and reflectance estimation from light-field cameras. In Proc. IEEE/CVF CVPR, pages 5451–5459.
Wang, Y., Liu, F., Wang, Z., Hou, G., Sun, Z., and Tan, T. (2018). End-to-end view synthesis for light field imaging with pseudo 4DCNN. In Proc. ECCV, pages 333–348.
Wizadwongsa, S., Phongthawee, P., Yenphraphai, J., and Suwajanakorn, S. (2021). NeX: Real-time view synthesis with neural basis expansion. In Proc. IEEE/CVF CVPR, pages 8534–8543.
Wood, D. N., Azuma, D. I., Aldinger, K., Curless, B., Duchamp, T., Salesin, D. H., and Stuetzle, W. (2000). Surface light fields for 3D photography. In SIGGRAPH, pages 287–296.
Wu, G., Zhao, M., Wang, L., Dai, Q., Chai, T., and Liu, Y. (2017). Light field reconstruction using deep convolutional network on EPI. In Proc. IEEE/CVF CVPR, pages 6319–6327.
Yang, W., Chen, G., Chen, C., Chen, Z., and Wong, K.-Y. K. (2022). PS-NeRF: Neural inverse rendering for multi-view photometric stereo. In Proc. ECCV.
Yao, Y., Zhang, J., Liu, J., Qu, Y., Fang, T., McKinnon, D., Tsin, Y., and Quan, L. (2022). NeILF: Neural incident light field for physically-based material estimation. In Proc. ECCV, pages 700–716. Springer.
Zhang, K., Luan, F., Li, Z., and Snavely, N. (2022a). IRON: Inverse rendering by optimizing neural SDFs and materials from photometric images. In Proc. IEEE/CVF CVPR, pages 5565–5574.
Zhang, K., Luan, F., Wang, Q., Bala, K., and Snavely, N. (2021a). PhySG: Inverse rendering with spherical Gaussians for physics-based material editing and relighting. In Proc. IEEE/CVF CVPR, pages 5453–5462.
Zhang, X., Srinivasan, P. P., Deng, B., Debevec, P., Freeman, W. T., and Barron, J. T. (2021b). NeRFactor: Neural factorization of shape and reflectance under an unknown illumination. ACM Transactions on Graphics (TOG), 40(6):1–18.
Zhang, Y., Sun, J., He, X., Fu, H., Jia, R., and Zhou, X. (2022b). Modeling indirect illumination for inverse rendering. In Proc. IEEE/CVF CVPR.
Zhou, T., Tucker, R., Flynn, J., Fyffe, G., and Snavely, N. (2018). Stereo magnification: Learning view synthesis using multiplane images. ACM Transactions on Graphics (TOG), 37(4).