5 CONCLUSIONS AND PERSPECTIVES
In conclusion, using a classifier to identify the type of damage in an image and then calling pre-trained models to restore that specific damage type yields very good results. Knowing what kind of damage an image has avoids calling overly complex and time-consuming models to restore minor damage. It is therefore necessary to train a very good classifier model, and transfer learning has proven very effective for building image classifiers, while also requiring less training time and data, which is increasingly relevant as generative models are applied to ever more domains (Pautrat-Lertora et al., 2022).
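As an illustration of this dispatch step, the following is a minimal sketch; the classifier interface, the damage labels, and the per-damage model names are hypothetical placeholders, not the exact interfaces of our implementation:

```python
# Minimal sketch (hypothetical names): route an image to the pre-trained
# model(s) matching the damage class predicted by the classifier, so that
# lightly damaged images never pay for the heavier restoration pipelines.
def restore(image, classifier, models):
    """`classifier` maps an image to a damage label; `models` maps a
    damage label ("blur" or "cracks") to a pre-trained restoration model."""
    label = classifier(image)  # e.g. "blur", "cracks", or "both"
    if label == "both":
        # When both damage types are present, the application order
        # matters; it should be chosen empirically (see below).
        return models["cracks"](models["blur"](image))
    return models[label](image)
```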
Furthermore, it has been shown that when an image has several types of damage, the order in which the restoration models are applied matters, since applying them in the wrong order results in a lower-quality restored image. To decide which model to apply first when an image exhibits several types of damage, the PSNR and SSIM metrics can be used.
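For instance, the better order can be selected empirically on reference images by scoring both pipelines, as in this minimal sketch (a hypothetical setup: `deblur` and `inpaint` stand for the pre-trained models, and scikit-image provides the metrics):

```python
# Minimal sketch, assuming scikit-image is available: compare the two
# possible model orders for an image with both blur and cracks, and keep
# the order whose output scores highest against a reference image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(reference: np.ndarray, restored: np.ndarray) -> tuple[float, float]:
    """Return (PSNR, SSIM) of a restored image against its reference."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

def best_order(reference, damaged, deblur, inpaint):
    """`deblur` and `inpaint` are placeholders for the pre-trained models
    (any callable image -> image). Returns the name of the better order
    and its restored image, ranked by PSNR."""
    candidates = {
        "deblur_then_inpaint": inpaint(deblur(damaged)),
        "inpaint_then_deblur": deblur(inpaint(damaged)),
    }
    return max(candidates.items(),
               key=lambda kv: score(reference, kv[1])[0])
```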
While a classifier is a good first step for restoring an image according to its type of damage, our classifier only detects whether an image is blurry, has cracks, or has both types of damage. Therefore, as future work, more damage types could be added to the classifier, such as color fading, missing parts of an image, and water damage, among others. One could also create a restoration model capable of repairing all kinds of damage in an image, although this might incur a very high execution time, which would be inconvenient if the model is intended for an application for people's daily use, similar to (Ysique-Neciosup et al., 2022; Castillo-Arredondo et al., 2023).
REFERENCES
Cao, J., Zhang, Z., Zhao, A., Cui, H., and Zhang, Q. (2020).
Ancient mural restoration based on a modified gener-
ative adversarial network. Heritage Science, 8(1):7.
Castillo-Arredondo, G., Moreno-Carhuacusma, D., and
Ugarte, W. (2023). Photohandler: Manipulation of
portrait images with stylegans using text. In ICSBT,
pages 73–82. SCITEPRESS.
Chen, Y., Liu, L., Tao, J., Xia, R., Zhang, Q., Yang, K.,
Xiong, J., and Chen, X. (2021). The improved image
inpainting algorithm via encoder and similarity con-
straint. Vis. Comput., 37(7):1691–1705.
Cheng, J., Yang, Y., Tang, X., Xiong, N., Zhang, Y.,
and Lei, F. (2020). Generative adversarial networks:
A literature review. KSII Trans. Internet Inf. Syst.,
14(12):4625–4647.
Fanfani, M., Colombo, C., and Bellavia, F. (2021). Restora-
tion and enhancement of historical stereo photos. J.
Imaging, 7(7):103.
Ferreira, I., Ochoa, L., and Koeshidayatullah, A. (2022).
On the generation of realistic synthetic petrographic
datasets using a style-based GAN. Scientific Reports,
12(1).
Fu, X. (2021). Research and application of ancient chinese
pattern restoration based on deep convolutional neural
network. Comput. Intell. Neurosci., 2021:2691346:1–
2691346:15.
Furat, O., Finegan, D. P., Yang, Z., Kirstein, T., Smith, K.,
and Schmidt, V. (2022). Super-resolving microscopy
images of li-ion electrodes for fine-feature quantifica-
tion using generative adversarial networks. npj Com-
putational Materials, 8(1).
Jiao, Q., Zhong, J., Liu, C., Wu, S., and Wong, H.
(2022). Perturbation-insensitive cross-domain image
enhancement for low-quality face verification. Inf.
Sci., 608:1183–1201.
Liang, B., Jia, X., and Lu, Y. (2021). Application of adap-
tive image restoration algorithm based on sparsity of
block structure in environmental art design. Complex.,
2021:9035163:1–9035163:16.
Liu, L. (2022). Computer-aided mural digital restoration
under generalized regression neural network. Mathe-
matical Problems in Engineering, 2022:1–8.
Luo, X., Zhang, X. C., Yoo, P., Martin-Brualla, R.,
Lawrence, J., and Seitz, S. M. (2021). Time-travel
rephotography. ACM Trans. Graph., 40(6):213:1–
213:12.
Nogales, A., Delgado-Martos, E., Melchor, Á., and García-Tejedor, Á. J. (2021). ARQGAN: an evaluation of generative adversarial network approaches for automatic virtual inpainting restoration of greek temples. Expert Syst. Appl., 180:115092.
Pautrat-Lertora, A., Perez-Lozano, R., and Ugarte, W.
(2022). EGAN: generatives adversarial networks for
text generation with sentiments. In KDIR, pages 249–
256. SCITEPRESS.
Poornapushpakala, S., Barani, S., Subramoniam, M., and
Vijayashree, T. (2022). Restoration of tanjore paint-
ings using segmentation and in-painting techniques.
Heritage Science, 10(1).
Qin, Z., Zeng, Q., Zong, Y., and Xu, F. (2021). Image in-
painting based on deep learning: A review. Displays,
69:102028.
Rao, J., Ke, A., Liu, G., and Ming, Y. (2023). MS-GAN:
multi-scale GAN with parallel class activation maps
for image reconstruction. Vis. Comput., 39(5):2111–
2126.
Shen, Z., Xu, T., Zhang, J., Guo, J., and Jiang, S. (2019). A
multi-task approach to face deblurring. EURASIP J.
Wirel. Commun. Netw., 2019:23.
Su, B., Liu, X., Gao, W., Yang, Y., and Chen, S. (2022).
A restoration method using dual generate adversarial