8 has a symmetric background, but the animal’s symmetry is artificial, although very coherent with the image. The lizard structure of Figure 5 was tested in two opposite configurations: perfectly symmetric and entirely asymmetric. Our method runs in quadratic time, but it is very sensitive to the pruning step.
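The quadratic cost and the sensitivity to pruning can be illustrated with a pairwise voting scheme for reflection axes, in the spirit of Loy and Eklundh (2006): every pair of points votes for the axis that maps one onto the other, and weakly supported axes are discarded. This is only a sketch under our own assumptions; the function name, the (theta, rho) binning, and the vote threshold are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def candidate_axes(points, min_votes=5, bin_size=0.1):
    """Vote for reflection axes over all point pairs (quadratic time).

    Each pair (p, q) votes for the axis reflecting p onto q: the
    perpendicular bisector of segment pq, parameterised as
    x*cos(theta) + y*sin(theta) = rho.  Axes with fewer than
    `min_votes` supporting pairs are pruned; results depend strongly
    on this threshold, which mirrors the sensitivity noted above.
    (Illustrative sketch; names and parameters are our own.)
    """
    votes = {}
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):           # O(n^2) pairs
            p, q = points[i], points[j]
            mid = (p + q) / 2.0
            d = q - p
            theta = np.arctan2(d[1], d[0])  # axis normal points along pq
            rho = mid[0] * np.cos(theta) + mid[1] * np.sin(theta)
            key = (round(theta / bin_size), round(rho / bin_size))
            votes[key] = votes.get(key, 0) + 1
    # pruning step: discard axes supported by too few pairs
    return {k: v for k, v in votes.items() if v >= min_votes}
```

For a point set mirrored about the vertical axis x = 0, the bin (0, 0) accumulates one vote per mirrored pair, while accidental pairings scatter across many bins and are pruned away.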
Table 1: Timings (seconds) on a 2.8 GHz Pentium D.

Model         #Points   #Symmetries   Time (s)
Butterfly       506         12           44
Eagle           765          9           83
Turtle          575          8           46
Lizard hand     271         10           87
Lizard body     294         10          122
4.3 Discussion
We achieve good results even when considering only axial symmetries and simply copying the image texture into the unknown region. When the symmetry structure traverses the holes, the completion of the foreground is clean (see Figures 7 and 8). The quality obtained in Figure 7(d) is a consequence of symmetry being present in the background as well. Only under detailed inspection can seams be detected between the visible and the reconstructed regions; these seams are noticeable only in the texture, not in the background.
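The core copy operation behind these results, mapping each known pixel location to its mirror image across the detected axis, can be sketched as follows (the parameterisation of the axis as x·cos θ + y·sin θ = ρ is our own convention, not necessarily the paper's):

```python
import numpy as np

def reflect(points, theta, rho):
    """Reflect 2-D points across the axis x*cos(theta) + y*sin(theta) = rho.

    Used here to carry visible structure (and, by assumption, texture
    coordinates) into the hole: each known location maps to its mirror
    image across the symmetry axis.  Sketch only.
    """
    n = np.array([np.cos(theta), np.sin(theta)])  # unit normal of the axis
    d = points @ n - rho                          # signed distance to the axis
    return points - 2.0 * d[:, None] * n
```

Applying the reflection twice returns each point to its original position, which provides a quick sanity check on any detected axis.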
Our method completes images based on symmetries of the image’s edges and assumes that the object’s texture is likely to follow the same transformation. However, this may not be the case. For example, in Figure 6 the missing wing of the eagle was well reconstructed from the visible one, although the synthesized background differs from the original in its tone of blue. Blending would solve this case.
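One simple blending scheme that would hide such tone differences is to feather the synthesized pixels into the known region, ramping their weight up with distance from the hole boundary. A minimal sketch, not the paper's code, using SciPy's Euclidean distance transform as the falloff (the function name and `radius` parameter are our own):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(original, synthesized, hole_mask, radius=8):
    """Feather synthesized pixels into the image to hide tone seams.

    The blend weight ramps from 0 at the hole boundary to 1 at
    `radius` pixels inside the hole, so pasted content fades into the
    surrounding tones instead of meeting them at a hard seam.
    (Illustrative sketch under our own assumptions.)
    """
    dist = distance_transform_edt(hole_mask)      # pixels inside the hole
    w = np.clip(dist / float(radius), 0.0, 1.0)   # 0 at seam -> 1 deep inside
    if original.ndim == 3:                        # broadcast over colour channels
        w = w[..., None]
    return original * (1.0 - w) + synthesized * w
```

With `radius=1` the blend degenerates to a hard paste, so the parameter directly trades seam visibility against how much synthesized tone bleeds outward.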
Our method only works with images where symmetry is present. Since most objects have symmetries, this is not a strong restriction. In practice, coherent results were obtained only when a single symmetry axis dominated the hole. The completed objects above are all seen from well-behaved viewpoints: an object can be symmetric from one point of view while not being so from others. One simple extension is to ask the user to mark four points defining the plane in which the symmetry holds. We would then work in a transformed space where the symmetry axis is contained in the image plane. One advantage of the method is that the user knows beforehand whether it will work, since they can usually see the symmetries themselves.
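The four marked points would determine a homography rectifying the symmetry plane, after which the axial machinery applies unchanged. A standard Direct Linear Transform estimate could serve here; this is an illustrative sketch (the function name and point conventions are ours, not the paper's):

```python
import numpy as np

def rectify_homography(src, dst):
    """Homography mapping 4 src points to 4 dst points via the DLT.

    Each correspondence (x, y) -> (u, v) contributes two linear
    constraints on the 9 entries of H; the solution is the null
    vector of the stacked 8x9 system, taken from the SVD.
    (Sketch; assumes no three of the points are collinear.)
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the arbitrary scale/sign
```

Mapping the marked quadrilateral onto a rectangle brings the symmetry axis into the image plane, so the completion can proceed as in the purely axial case.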
5 CONCLUSIONS
In this work, we propose to incorporate global struc-
tural information of an image into inpainting tech-
niques. In particular, we present a method for inpaint-
ing images that deals with large unknown regions by
using symmetries of the picture to complete it. This scheme is fully automated, requiring from the user only the specification of the hole. The current technique
restricts itself to the analysis of axial symmetries of
the image’s edges, focusing on structure rather than
texture. On the one hand, the transformation space can easily be extended within the same framework, incorporating translations, rotations, and possibly projective transformations, at the cost of searching a higher-dimensional space of transformations. On the other
hand, texture descriptors could be used to improve
both the symmetry detection and the image synthesis
(see Figure 10). Moreover, the insertion of the synthe-
sized parts into the image can be improved by exist-
ing inpainting techniques. Another line of work, fol-
lowing (Hays and Efros, 2007), is to build a database
of object boundaries. Completion would proceed by
matching the visible part of the object with those in
the database.
REFERENCES
Bertalmio, M., Sapiro, G., Caselles, V., and Ballester, C.
(2000). Image inpainting. In SIGGRAPH. ACM.
Comaniciu, D. and Meer, P. (2002). Mean shift: a robust
approach toward feature space analysis. PAMI.
Criminisi, A., Pérez, P., and Toyama, K. (2003). Object removal by exemplar-based inpainting. In CVPR.
Drori, I., Cohen-Or, D., and Yeshurun, H. (2003).
Fragment-based image completion. TOG.
Efros, A. A. and Freeman, W. T. (2001). Image quilting for
texture synthesis and transfer. In SIGGRAPH. ACM.
Gal, R. and Cohen-Or, D. (2006). Salient geometric features
for partial shape matching and similarity. TOG.
Hays, J. and Efros, A. (2007). Scene completion using mil-
lions of photographs. In SIGGRAPH, page 4. ACM.
Kazhdan, M., Funkhouser, T., and Rusinkiewicz, S. (2004).
Symmetry descriptors and 3d shape matching. In
SGP. ACM/Eurographics.
Loy, G. and Eklundh, J.-O. (2006). Detecting symmetry
and symmetric constellations of features. In European
Conference on Computer Vision, pages 508–521.
Mattis, P. and Kimball, S. (2008). GIMP, the GNU Image Manipulation Program.
Mitra, N., Guibas, L., and Pauly, M. (2006). Partial and ap-
proximate symmetry detection for 3d geometry. TOG.
Mitra, N., Guibas, L., and Pauly, M. (2007). Symmetriza-
tion. TOG, 26(3):63.
SYMMETRY-BASED COMPLETION