Demir, U. and Ünal, G. B. (2018). Patch-based image inpainting with generative adversarial networks. CoRR, abs/1803.07422.
Efros, A. A. and Leung, T. K. (1999). Texture synthesis by
non-parametric sampling. In ICCV, page 1033.
Fedorov, V., Arias, P., Facciolo, G., and Ballester, C. (2016). Affine invariant self-similarity for exemplar-based inpainting. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pages 48–58.
Fedorov, V., Facciolo, G., and Arias, P. (2015). Variational Framework for Non-Local Inpainting. Image Processing On Line, 5:362–386.
Getreuer, P. (2012). Total Variation Inpainting using Split
Bregman. Image Processing On Line, 2:147–157.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. (2017). Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pages 5769–5779.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In CVPR.
Huang, J. B., Kang, S. B., Ahuja, N., and Kopf, J. (2014). Image completion using planar structure guidance. ACM Trans. Graph., 33(4):129:1–129:10.
Iizuka, S., Simo-Serra, E., and Ishikawa, H. (2017). Globally and locally consistent image completion. ACM Trans. Graph., 36(4):107:1–107:14.
Kawai, N., Sato, T., and Yokoya, N. (2009). Image inpainting considering brightness change and spatial locality of textures and its evaluation. In Advances in Image and Video Technology, pages 271–282.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv:1312.6114.
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A. P., Tejani, A., Totz, J., Wang, Z., et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, volume 2, page 4.
Li, Y., Liu, S., Yang, J., and Yang, M.-H. (2017). Generative
face completion. In CVPR, volume 1, page 3.
Liu, M.-Y., Breuel, T., and Kautz, J. (2017). Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pages 700–708.
Liu, Z., Luo, P., Wang, X., and Tang, X. (2015). Deep learning face attributes in the wild. In ICCV.
Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., and Smolley, S. P. (2017). Least squares generative adversarial networks. In ICCV, pages 2813–2821.
Masnou, S. and Morel, J.-M. (1998). Level lines based
disocclusion. In Proc. of IEEE ICIP.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Nguyen, A., Yosinski, J., Bengio, Y., Dosovitskiy, A., and
Clune, J. (2016). Plug & play generative networks:
Conditional iterative generation of images in latent
space. arXiv:1612.00005.
Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In CVPR.
Pérez, P., Gangnet, M., and Blake, A. (2003). Poisson image editing. In ACM SIGGRAPH 2003.
Pumarola, A., Agudo, A., Sanfeliu, A., and Moreno-Noguer, F. (2018). Unsupervised Person Image Synthesis in Arbitrary Poses. In CVPR.
Radford, A., Metz, L., and Chintala, S. (2015). Unsuper-
vised representation learning with deep convolutional
generative adversarial networks. arXiv:1511.06434.
Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. (2016). Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on Machine Learning, pages 1060–1069.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X., and Chen, X. (2016). Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29.
van den Oord, A., Kalchbrenner, N., Espeholt, L., Kavukcuoglu, K., Vinyals, O., and Graves, A. (2016). Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems 29.
Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., and Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR.
Wang, Z. (2008). Image affine inpainting. In Image Analysis and Recognition, volume 5112 of Lecture Notes in Computer Science, pages 1061–1070.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli,
E. P. (2004). Image quality assessment: from error
visibility to structural similarity. IEEE Trans. on IP,
13(4):600–612.
Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H. (2017). High-resolution image inpainting using multi-scale neural patch synthesis. In CVPR, volume 1, page 3.
Yeh, R. A., Chen, C., Lim, T.-Y., Schwing, A. G., Hasegawa-Johnson, M., and Do, M. N. (2017). Semantic image inpainting with deep generative models. In CVPR, volume 2, page 4.
Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., and Huang, T. S. (2018). Generative image inpainting with contextual attention. In CVPR.
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017a).
Unpaired image-to-image translation using cycle-
consistent adversarial networks. arXiv preprint.
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017b).
Unpaired image-to-image translation using cycle-
consistent adversarial networks. In ICCV.