be learned in upcoming modifications of the EdgeConnect and 3GAN approaches. Our future work includes the implementation of semantic segmentation of façades, providing a complete pipeline from sensor data to cleaned façades, as well as the generalization of the 3GAN approach to a wider class of problems.
ACKNOWLEDGEMENTS
We express our deep gratitude to Dr. Susanne Wenzel for providing us with the eTRIMS dataset (Korč and Förstner, 2009) and the labeling tool. We thank the authors of (Liu et al., 2018) and (Nazeri et al., 2019) for making their code available online.
REFERENCES
Bulatov, D., Häufel, G., Meidow, J., Pohl, M., Solbrig, P., and Wernerus, P. (2014). Context-based automatic reconstruction and texturing of 3D urban terrain for quick-response tasks. ISPRS Journal of Photogrammetry and Remote Sensing, 93:157–170.
Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698.
Chan, T. F. and Shen, J. (2001). Nontexture inpainting by curvature-driven diffusions. Journal of Visual Communication and Image Representation, 12(4):436–449.
Criminisi, A., Pérez, P., and Toyama, K. (2004). Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9):1200–1212.
Elharrouss, O., Almaadeed, N., Al-Maadeed, S., and Akbari, Y. (2019). Image inpainting: A review. Neural Processing Letters, 51:2007–2028.
Fathalla, R. and Vogiatzis, G. (2017). A deep learning pipeline for semantic facade segmentation. In Proc. British Machine Vision Conference, pages 120.1–120.13.
Gadde, R., Marlet, R., and Paragios, N. (2016). Learning grammars for architecture-specific facade parsing. International Journal of Computer Vision, 117(3):290–316.
Gatys, L. A., Ecker, A. S., and Bethge, M. (2015). A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. arXiv preprint arXiv:1706.08500.
Huang, Z., Qin, C., Liu, R., Weng, Z., and Zhu, Y. (2021). Semantic-aware context aggregation for image inpainting. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2465–2469. IEEE.
Iizuka, S., Simo-Serra, E., and Ishikawa, H. (2017). Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):1–14.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1125–1134.
Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In Proc. European Conference on Computer Vision (ECCV), pages 694–711. Springer.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Korč, F. and Förstner, W. (2009). eTRIMS Image Database for interpreting images of man-made scenes. Technical Report TR-IGG-P-2009-01, Dept. of Photogrammetry, University of Bonn.
Korč, F. and Schneider, D. (2007). Annotation tool. Technical Report TR-IGG-P-2007-01, Dept. of Photogrammetry, University of Bonn.
Kottler, B., Bulatov, D., and Schilling, H. (2016). Improving semantic orthophotos by a fast method based on harmonic inpainting. In Proc. 9th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), pages 1–5. IEEE.
Kottler, B., Bulatov, D., and Zhang, X. (2020). Context-aware patch-based method for façade inpainting. In VISIGRAPP (1: GRAPP), pages 210–218.
Li, J., Wang, N., Zhang, L., Du, B., and Tao, D. (2020). Recurrent feature reasoning for image inpainting. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7760–7768.
Liao, L., Xiao, J., Wang, Z., Lin, C.-W., and Satoh, S. (2020). Guidance and evaluation: Semantic-aware image inpainting for mixed scenes. In Proc. 16th European Conference on Computer Vision (ECCV), Part XXVII, pages 683–700. Springer.
Liu, G., Reda, F. A., Shih, K. J., Wang, T.-C., Tao, A., and Catanzaro, B. (2018). Image inpainting for irregular holes using partial convolutions. In Proc. European Conference on Computer Vision (ECCV), pages 85–100.
Michaelsen, E., Iwaszczuk, D., Sirmacek, B., Hoegner, L., and Stilla, U. (2012). Gestalt grouping on facade textures from IR image sequences: Comparing different production systems. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 39(B3):303–308.
Nazeri, K., Ng, E., Joseph, T., Qureshi, F. Z., and Ebrahimi, M. (2019). EdgeConnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212.