
Li, D., Shi, X., Zhang, Y., Cheung, K.-T., See, S., Wang, X., and Li, H. (2023). A simple baseline for video restoration with grouped spatial-temporal shift. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9822–9832.
Liang, H., Wu, T., Hanji, P., Banterle, F., Gao, H., Mantiuk, R., and Oztireli, C. (2023). Perceptual quality assessment of NeRF and neural view synthesis methods for front-facing views. arXiv preprint arXiv:2303.15206.
Ma, L., Li, X., Liao, Z., Zhang, Q., Wang, X., Wang, J., and Sander, P. V. (2021). Deblur-NeRF: Neural radiance fields from blurry images. arXiv preprint arXiv:2111.14292.
Mao, X., Liu, Y., Liu, F., Li, Q., Shen, W., and Wang, Y. (2023). Intriguing findings of frequency selection for image deblurring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 1905–1913.
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In Proceedings of the European Conference on Computer Vision (ECCV), pages 405–421. Springer.
Peng, C. and Chellappa, R. (2023). PDRF: Progressively deblurring radiance field for fast scene reconstruction from blurry images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 2029–2037.
Richardson, W. H. (1972). Bayesian-based iterative method
of image restoration. JOSA, 62(1):55–59.
Rosebrock, A. (2020). OpenCV fast Fourier transform (FFT) for blur detection in images and video streams. Accessed: 2024-12-16.
Rubloff, M. (2023). What are the NeRF metrics? Accessed: 2023-12-19.
Schönberger, J. L. and Frahm, J.-M. (2016). Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4104–4113.
Smith, L. (2012). Estimating an image’s blur kernel from edge intensity profiles. Technical report, Naval Research Laboratory.
Wang, P., Zhao, L., Ma, R., and Liu, P. (2023). BAD-NeRF: Bundle adjusted deblur neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4170–4179.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612.
Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., and Yang, M.-H. (2022). Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5728–5739.
Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., Yang, M.-H., and Shao, L. (2021). Multi-stage progressive image restoration. arXiv preprint arXiv:2102.02808.
Zhang, K., Ren, W., Luo, W., Lai, W.-S., Stenger, B., Yang,
M.-H., and Li, H. (2022). Deep image deblurring: A
survey. arXiv preprint arXiv:2202.10881.
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 586–595.
Zhong, Z., Cao, M., Ji, X., Zheng, Y., and Sato, I. (2023). Blur interpolation transformer for real-world motion from blur. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5713–5723.