
REFERENCES
Abdelhamed, A., Lin, S., and Brown, M. S. (2018). A high-
quality denoising dataset for smartphone cameras. In
CVPR, pages 1692–1700.
Caruana, R. (1997). Multitask learning. Mach. Learn.,
28(1):41–75.
Chen, L., Chu, X., Zhang, X., and Sun, J. (2022). Simple
baselines for image restoration. In ECCV, Part VII,
volume 13667, pages 17–33.
Chen, L., Lu, X., Zhang, J., Chu, X., and Chen, C. (2021).
Hinet: Half instance normalization network for image
restoration. In CVPR, pages 182–192.
Cheng, S., Wang, Y., Huang, H., Liu, D., Fan, H., and Liu,
S. (2021). Nbnet: Noise basis learning for image de-
noising with subspace projection. In CVPR, pages
4896–4906.
Cho, S., Ji, S., Hong, J., Jung, S., and Ko, S. (2021). Rethinking coarse-to-fine approach in single image deblurring. In ICCV, pages 4621–4630.
Crawshaw, M. (2020). Multi-task learning with deep neural
networks: A survey. CoRR, abs/2009.09796.
Drira, F., Lebourgeois, F., and Emptoz, H. (2012). A new
pde-based approach for singularity-preserving regu-
larization: application to degraded characters restora-
tion. IJDAR, 15(3):183–212.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. C., and Bengio, Y. (2014). Generative adversarial nets. In NIPS, pages 2672–2680.
Guemri, K., Drira, F., Walha, R., Alimi, A. M., and Lebourgeois, F. (2017). Edge based blind single image deblurring with sparse priors. In VISAPP, pages 174–181.
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and
Courville, A. C. (2017). Improved training of wasserstein gans. In NIPS, pages 5767–5777.
Guo, S., Yan, Z., Zhang, K., Zuo, W., and Zhang, L.
(2019). Toward convolutional blind denoising of real
photographs. In CVPR, pages 1712–1722.
Harizi, R., Walha, R., and Drira, F. (2022a). Deep-learning
based end-to-end system for text reading in the wild.
Multim. Tools Appl., 81(17):24691–24719.
Harizi, R., Walha, R., Drira, F., and Zaied, M. (2022b).
Convolutional neural network with joint stepwise
character/word modeling based system for scene text
recognition. Multim. Tools Appl., 81(3):3091–3106.
Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In ECCV, Part II, volume 9906, pages 694–711.
Koh, J., Lee, J., and Yoon, S. (2021). Single-image de-
blurring with neural networks: A comparative survey.
Comput. Vis. Image Underst., 203:103134.
Kupyn, O., Martyniuk, T., Wu, J., and Wang, Z. (2019). Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In ICCV, pages 8877–8886.
Li, X., Wu, J., Lin, Z., Liu, H., and Zha, H. (2018). Recur-
rent squeeze-and-excitation context aggregation net
for single image deraining. In ECCV, Part VII, vol-
ume 11211, pages 262–277.
Lin, T., Dollár, P., Girshick, R. B., He, K., Hariharan, B., and Belongie, S. J. (2017). Feature pyramid networks for object detection. In CVPR, pages 936–944.
Liu, X., Suganuma, M., Luo, X., and Okatani, T. (2019). Restoring images with unknown degradation factors by recurrent use of a multi-branch network. arXiv preprint.
Martyniuk, T. (2019). Multi-task learning for image
restoration. PhD thesis, Faculty of Applied Sciences,
Ukraine.
Nah, S., Kim, T. H., and Lee, K. M. (2017). Deep multi-
scale convolutional neural network for dynamic scene
deblurring. In CVPR, pages 257–265.
Ren, W., Zhang, J., Pan, J., Liu, S., Ren, J. S., Du, J., Cao,
X., and Yang, M. (2022). Deblurring dynamic scenes
via spatially varying recurrent neural networks. IEEE
Trans. Pattern Anal. Mach. Intell., 44(8):3974–3987.
Sandler, M., Howard, A. G., Zhu, M., Zhmoginov, A., and
Chen, L. (2018). Mobilenetv2: Inverted residuals and
linear bottlenecks. In CVPR, pages 4510–4520.
Tao, X., Gao, H., Shen, X., Wang, J., and Jia, J. (2018).
Scale-recurrent network for deep image deblurring. In
CVPR, pages 8174–8182.
Tu, Z., Talebi, H., Zhang, H., Yang, F., Milanfar, P., Bovik,
A., and Li, Y. (2022). Maxim: Multi-axis mlp for
image processing. In CVPR, pages 5759–5770.
Walha, R., Drira, F., Alimi, A. M., Lebourgeois, F., and Gar-
cia, C. (2014). A sparse coding based approach for the
resolution enhancement and restoration of printed and
handwritten textual images. In ICFHR, pages 696–701.
Walha, R., Drira, F., Lebourgeois, F., Garcia, C., and Alimi,
A. M. (2013). Single textual image super-resolution
using multiple learned dictionaries based sparse cod-
ing. In ICIAP, Part II, volume 8157, pages 439–448.
Walha, R., Drira, F., Lebourgeois, F., Garcia, C., and Alimi,
A. M. (2015). Joint denoising and magnification of
noisy low-resolution textual images. In ICDAR, pages 871–875.
Walha, R., Drira, F., Lebourgeois, F., Garcia, C., and Alimi,
A. M. (2018). Handling noise in textual image resolution enhancement using online and offline learned dictionaries. IJDAR, 21(1-2):137–157.
Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., and Li, H.
(2022). Uformer: A general u-shaped transformer for
image restoration. In CVPR, pages 17662–17672.
Xia, H., Wu, B., Tan, Y., Tang, X., and Song, S. (2022).
Mfc-net: Multi-scale fusion coding network for image
deblurring. Appl. Intell., 52(11):13232–13249.
Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan,
F. S., and Yang, M. (2022). Restormer: Efficient
transformer for high-resolution image restoration. In
CVPR, pages 5718–5729.
Zamir, S. W., Arora, A., Khan, S. H., Hayat, M., Khan, F. S.,
Yang, M., and Shao, L. (2020). Learning enriched
features for real image restoration and enhancement.
In ECCV, Part XXV, volume 12370, pages 492–511.
Zamir, S. W., Arora, A., Khan, S. H., Hayat, M., Khan, F. S.,
Yang, M., and Shao, L. (2021). Multi-stage progres-
sive image restoration. In CVPR, pages 14821–14831.
A Multi-Task Learning Framework for Image Restoration Using a Novel Generative Adversarial Network