
Figure 2: Average PSNR on Set5 across the GD iterations for the solution of a noiseless completion problem with 90% of the pixels missing (see Table 1).
The example in Figure 2 is reconstructed with a step-size of 0.025, the largest step-size used in testing; most of the other problems use much smaller step-sizes and do not exhibit the same phenomenon.
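For context, the GD-based reconstruction discussed above follows the standard plug-and-play gradient step x_{k+1} = x_k - τ(Aᵀ(Ax_k - y) + R(x_k)), where R is the learned regularizing gradient and τ the step-size. The following minimal NumPy sketch uses a hand-written Tikhonov gradient in place of the learned RG; the function and variable names are illustrative, not the paper's code:

```python
import numpy as np

def pnp_gd(y, A, reg_grad, tau=0.025, n_iter=1500):
    """Plug-and-play gradient descent for the inverse problem y = A x.

    reg_grad stands in for the learned regularizing gradient R(x);
    tau and n_iter mirror the step-size and iteration count used in testing.
    """
    x = A.T @ y  # simple back-projection initialization (an assumption)
    for _ in range(n_iter):
        # data-fidelity gradient A^T (A x - y) plus regularizing gradient
        x = x - tau * (A.T @ (A @ x - y) + reg_grad(x))
    return x
```

With `reg_grad = lambda x: x` (the gradient of (1/2)||x||²), the iteration converges to the Tikhonov solution (AᵀA + I)⁻¹Aᵀy, which makes the sketch easy to sanity-check on a toy problem.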
One significant issue with all the GD-based schemes considered so far is that they are slow at test time (and at training time, if one uses a DEQGD approach). The test we performed used 1500 forward iterations, as do the tests in (Fermanian et al., 2023), making these schemes much slower than the PnP ADMM used for the MTDEQ (the hyper-parameters of the PnP ADMM algorithm can also be found in (Fermanian et al., 2023)).
Further investigation of different hyper-parameters for the training of an RERG could yield even better performance on the tasks considered and improve convergence speed at test time. Procedures exist to speed up fixed-point (FP) computation, such as the learned solvers of (Bai et al., 2021) or the correction terms of (Bai et al., 2022), and these could be incorporated to accelerate both inference and FP estimation during training.
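One widely used family of such speed-ups is Anderson-type acceleration, which mixes the last few iterates to extrapolate toward the fixed point. The sketch below implements the classical Anderson scheme that underlies DEQ solvers, not the learned solver of (Bai et al., 2021); all names and default parameters are illustrative:

```python
import numpy as np

def anderson(f, x0, m=5, n_iter=50, beta=1.0, lam=1e-4):
    """Anderson acceleration for the fixed point x = f(x).

    m: history size; beta: mixing parameter; lam: Tikhonov term that
    stabilizes the small least-squares solve for the mixing weights.
    """
    X = [np.asarray(x0, dtype=float)]  # past iterates
    F = [f(X[0])]                      # f evaluated at past iterates
    for k in range(1, n_iter):
        n = min(k, m)
        # residuals g_i = f(x_i) - x_i for the last n iterates (newest first)
        G = np.stack([F[-i] - X[-i] for i in range(1, n + 1)])
        # regularized normal equations for the mixing weights alpha
        H = G @ G.T + lam * np.eye(n)
        alpha = np.linalg.solve(H, np.ones(n))
        alpha /= alpha.sum()
        Fs = np.stack(F[-n:][::-1])    # newest first, matching G's ordering
        Xs = np.stack(X[-n:][::-1])
        x_new = beta * (alpha @ Fs) + (1 - beta) * (alpha @ Xs)
        X.append(x_new)
        F.append(f(x_new))
        if np.linalg.norm(F[-1] - X[-1]) < 1e-10:  # converged
            break
    return X[-1]
```

On a contractive map such as f(x) = 0.5x + 1, the scheme recovers the fixed point x = 2 in a handful of iterations, far fewer than plain fixed-point iteration would need to the same tolerance.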
7 CONCLUSION
In this paper, we introduced an upper bound that can be used in the training of a GD procedure both as a training objective and as a regularization. We compared four different types of RGs on a range of inverse problems and discussed their differences, showing that using the upper bound as a regularization to create an RERG can mitigate some of the disadvantages of the DEQGD and the RG1.
RGs have so far received little investigation; we extended the theoretical framework introduced in (Fermanian et al., 2023) and proposed two novel ways of training RGs. We compared the resulting RGs and demonstrated that the RERG, which combines the upper bound with a DEQGD, produces strong reconstruction results across all the inverse problems considered.
REFERENCES
Agustsson, E. and Timofte, R. (2017). NTIRE 2017 challenge on single image super-resolution: Dataset and study. In CVPR Workshops, pages 126–135. IEEE.
Arbelaez, P., Maire, M., Fowlkes, C., and Malik, J. (2011).
Contour detection and hierarchical image segmenta-
tion. IEEE TPAMI, 33(5):898–916.
Bai, S., Geng, Z., Savani, Y., and Kolter, J. Z. (2022). Deep equilibrium optical flow estimation. In CVPR, pages 620–630. IEEE.
Bai, S., Kolter, J. Z., and Koltun, V. (2019). Deep equilib-
rium models. arXiv preprint arXiv:1909.01377.
Bai, S., Koltun, V., and Kolter, J. Z. (2021). Neural deep
equilibrium solvers. In International Conference on
Learning Representations.
Bauschke, H. H. and Combettes, P. L. (2017). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer.
Bevilacqua, M., Roumy, A., Guillemot, C., and Alberi-
Morel, M. L. (2012). Low-complexity single-image
super-resolution based on nonnegative neighbor em-
bedding. In BMVC, pages 135.1–135.10. BMVA
press.
Chan, S. H., Wang, X., and Elgendy, O. A. (2016). Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE TCI, 3(1):84–98.
Fermanian, R., Le Pendu, M., and Guillemot, C. (2023). PnP-ReG: Learned regularizing gradient for plug-and-play gradient descent. SIIMS, 16(2):585–613.
Fung, S. W., Heaton, H., Li, Q., McKenzie, D., Osher, S., and Yin, W. (2022). JFB: Jacobian-free backpropagation for implicit networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6648–6656.
Gilton, D., Ongie, G., and Willett, R. (2021). Deep equi-
librium architectures for inverse problems in imaging.
IEEE TCI, 7:1123–1133.
Kingma, D. P. and Ba, J. (2014). Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980.
Le Pendu, M. and Guillemot, C. (2023). Preconditioned plug-and-play ADMM with locally adjustable denoiser for image restoration. SIIMS, 16(1):393–422.
Lim, B., Son, S., Kim, H., Nah, S., and Lee, K. M. (2017).
Enhanced deep residual networks for single image
super-resolution. In CVPR. IEEE.
Ling, Z., Xie, X., Wang, Q., Zhang, Z., and Lin, Z. (2022).
Global convergence of over-parameterized deep equi-
librium models. arXiv preprint arXiv:2205.13814.
Ma, K., Duanmu, Z., Wu, Q., Wang, Z., Yong, H., Li, H.,
and Zhang, L. (2016). Waterloo exploration database:
New challenges for image quality assessment models.
IEEE TIP, 26(2):1004–1016.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications