
González, M., Almansa, A., and Tan, P. (2022). Solving
inverse problems by joint posterior maximization with
autoencoding prior. SIAM Journal on Imaging Sci-
ences, 15(2):822–859.
Harvey, W., Naderiparizi, S., and Wood, F. (2022). Con-
ditional image generation by conditioning variational
auto-encoders. In ICLR.
Havtorn, J. D., Frellsen, J., Hauberg, S., and Maaløe, L.
(2021). Hierarchical VAEs know what they don’t know.
In ICML, pages 4117–4128. PMLR.
Hazami, L., Mama, R., and Thurairatnam, R. (2022).
Efficient-VDVAE: Less is more.
Ho, J., Jain, A., and Abbeel, P. (2020). Denoising diffusion
probabilistic models. NeurIPS, 33:6840–6851.
Jiang, L. (2022). Image super-resolution via it-
erative refinement. https://github.com/Janspiry/
Image-Super-Resolution-via-Iterative-Refinement.
Kang, M., Zhu, J.-Y., Zhang, R., Park, J., Shechtman, E.,
Paris, S., and Park, T. (2023). Scaling up gans for
text-to-image synthesis. In IEEE/CVF CVPR, pages
10124–10134.
Karras, T., Laine, S., and Aila, T. (2019). A style-based
generator architecture for generative adversarial net-
works. In IEEE/CVF CVPR, pages 4401–4410.
Kawar, B., Elad, M., Ermon, S., and Song, J. (2022). De-
noising diffusion restoration models. arXiv preprint
arXiv:2201.11793.
Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X.,
Sutskever, I., and Welling, M. (2016). Improved vari-
ational inference with inverse autoregressive flow.
NeurIPS, 29.
Kingma, D. P. and Welling, M. (2013). Auto-encoding vari-
ational Bayes. arXiv preprint arXiv:1312.6114.
Lepcha, D. C., Goyal, B., Dogra, A., and Goyal, V. (2022).
Image super-resolution: A comprehensive review, re-
cent trends, challenges and applications. Information
Fusion.
Li, H., Yang, Y., Chang, M., Chen, S., Feng, H., Xu, Z.,
Li, Q., and Chen, Y. (2022). SRDiff: Single image
super-resolution with diffusion probabilistic models.
Neurocomputing, 479:47–59.
Liang, J., Lugmayr, A., Zhang, K., Danelljan, M.,
Van Gool, L., and Timofte, R. (2021). Hierarchi-
cal conditional flow: A unified framework for image
super-resolution and image rescaling. In IEEE/CVF
ICCV, pages 4076–4085.
Lugmayr, A., Danelljan, M., Van Gool, L., and Timofte, R.
(2020). SRFlow: Learning the super-resolution space
with normalizing flow. In ECCV, pages 715–732.
Springer.
Lugmayr, A., Danelljan, M., and Timofte, R. (2021). NTIRE
2021 learning the super-resolution space challenge. In
IEEE/CVF CVPR, pages 596–612.
Lugmayr, A., Danelljan, M., Timofte, R., Kim, K.-w., Kim,
Y., Lee, J.-y., Li, Z., Pan, J., Shim, D., Song, K.-U.,
et al. (2022). NTIRE 2022 challenge on learning the
super-resolution space. In IEEE/CVF CVPR, pages
786–797.
Marinescu, R. V., Moyer, D., and Golland, P. (2020).
Bayesian image reconstruction using deep generative
models. arXiv preprint arXiv:2012.04567.
Mattei, P.-A. and Frellsen, J. (2018). Leveraging the exact
likelihood of deep latent variable models. NeurIPS,
31.
Menon, S., Damian, A., Hu, S., Ravi, N., and Rudin, C.
(2020). PULSE: Self-supervised photo upsampling via
latent space exploration of generative models. In
IEEE/CVF CVPR, pages 2437–2445.
Mittal, A., Moorthy, A. K., and Bovik, A. C. (2012).
No-reference image quality assessment in the spa-
tial domain. IEEE Trans. on Image Processing,
21(12):4695–4708.
Pan, X., Zhan, X., Dai, B., Lin, D., Loy, C. C., and
Luo, P. (2021). Exploiting deep generative prior for
versatile image restoration and manipulation. IEEE
Trans. on Pattern Analysis and Machine Intelligence,
44(11):7474–7489.
Poirier-Ginter, Y. and Lalonde, J.-F. (2023). Robust un-
supervised stylegan image restoration. In IEEE/CVF
CVPR, pages 22292–22301.
Prost, J., Houdard, A., Almansa, A., and Papadakis, N.
(2023). Inverse problem regularization with hierar-
chical variational autoencoders. In IEEE/CVF ICCV,
pages 22894–22905.
Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar,
Y., Shapiro, S., and Cohen-Or, D. (2021). Encoding
in style: a stylegan encoder for image-to-image trans-
lation. In IEEE/CVF CVPR, pages 2287–2296.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Om-
mer, B. (2022). High-resolution image synthesis with
latent diffusion models. In IEEE/CVF CVPR, pages
10684–10695.
Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D. J., and
Norouzi, M. (2021). Image super-resolution via itera-
tive refinement. arXiv preprint arXiv:2104.07636.
Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K.,
and Winther, O. (2016). How to train deep variational
autoencoders and probabilistic ladder networks. In
NeurIPS, volume 29.
Song, J., Vahdat, A., Mardani, M., and Kautz, J. (2023).
Pseudoinverse-guided diffusion models for inverse
problems. In ICLR.
Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Er-
mon, S., and Poole, B. (2021). Score-based generative
modeling through stochastic differential equations. In
ICLR.
Vahdat, A. and Kautz, J. (2020). NVAE: A deep hi-
erarchical variational autoencoder. arXiv preprint
arXiv:2007.03898.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P.
(2004). Image quality assessment: from error visi-
bility to structural similarity. IEEE Trans. on Image
Processing, 13(4):600–612.
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang,
O. (2018). The unreasonable effectiveness of deep
features as a perceptual metric. In IEEE CVPR, pages
586–595.
Zhou, H., Huang, C., Gao, S., and Zhuang, X. (2021).
VSPSR: Explorable super-resolution via variational
sparse representation. In IEEE/CVF CVPR, pages
373–381.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications