sults than state-of-the-art methods in dark environments. As opposed to feeding all frames into the network at once, the RNN accumulates valid information frame by frame throughout the entire downsampling and upsampling process: the effective information of each frame is aggregated with the previous aggregation result. Additionally, our model maps raw burst images directly to sRGB outputs, producing either a single best image or a multi-frame denoised image sequence. As a consequence, our framework is highly flexible and opens up the possibility of extending it to video as well as image denoising. Finally, we have shown that our framework is highly portable and has considerable cross-platform potential; a model trained on one mobile phone can be applied directly to another camera's raw bursts without fine-tuning, and a similar level of enhancement can be expected.
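To illustrate the aggregation principle described above, the following is a minimal NumPy sketch of frame-by-frame recurrent fusion. It is not the actual network: the function name `aggregate_burst` and the blending weight `alpha` are illustrative assumptions, and a simple running average stands in for the learned recurrent cell inside the encoder-decoder.

```python
import numpy as np

def aggregate_burst(frames, alpha=0.8):
    """Sketch of recurrent burst aggregation: each incoming frame is
    fused with the previous aggregation state rather than stacking
    the whole burst as one input. `alpha` (hypothetical) weights the
    accumulated state against the new frame."""
    outputs = []
    state = None
    for frame in frames:
        frame = frame.astype(np.float64)
        if state is None:
            # first frame initializes the aggregation state
            state = frame
        else:
            # blend new frame's information into the accumulated state
            state = alpha * state + (1.0 - alpha) * frame
        outputs.append(state.copy())  # per-frame denoised estimate
    # outputs is the multi-frame denoised sequence; outputs[-1]
    # plays the role of the single "best" image
    return outputs
```

Because each output reuses all previously accumulated information, noise variance decreases as more frames arrive, which mirrors why later frames in the burst yield cleaner estimates.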
Currently, we focus only on static scenes; if an object moves in the scene, blur will inevitably occur. In future work, we will consider training our network on a dataset containing moving objects and using a loss function with motion-adaptive weights. By optimizing the network architecture and training procedure, we expect to yield further improvements in image quality.