3.3 Rectangular Objects Generation
Good examples of rectangular regions in urban imagery are roofs and streets. In some cases, however, the segmented objects do not preserve such a rectangular shape; they are broken apart into smaller irregular objects. Our aim is therefore to join such over-segmented regions.
To quantify the rectangularity degree of an object P_i, we calculate the ratio between its area AREA(P_i) and the area of its bounding box AREA(BOX(P_i)). Due to rotation, this measure alone cannot correctly represent the object's regularity. Thus, a preprocessing step is performed in order to make the rectangularity measure invariant to rotation.
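To see why the raw bounding-box ratio is not rotation invariant, consider a rectangle sampled on a unit grid: axis-aligned, it fills its bounding box almost completely, but rotated by 45 degrees it fills less than half of it. A small NumPy illustration (`bbox_fill_ratio` is a hypothetical helper, not from the paper; the area is approximated by the pixel count at unit sampling):

```python
import numpy as np

def bbox_fill_ratio(pts):
    """Point count (a proxy for area at unit sampling) over the
    axis-aligned bounding-box area."""
    w = np.ptp(pts[:, 0]) + 1.0   # +1 accounts for unit pixel extent
    h = np.ptp(pts[:, 1]) + 1.0
    return len(pts) / (w * h)

# Axis-aligned 40 x 20 rectangle sampled at unit spacing.
xs, ys = np.meshgrid(np.arange(40), np.arange(20))
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# The same rectangle rotated by 45 degrees.
theta = np.deg2rad(45.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts_rot = pts @ rot.T

print(bbox_fill_ratio(pts))      # ~1.0 for the axis-aligned rectangle
print(bbox_fill_ratio(pts_rot))  # drops well below 1 after rotation
```

The rotated rectangle keeps the same area but its axis-aligned bounding box grows, so the ratio drops even though the shape is still a perfect rectangle; this is the bias the preprocessing step removes.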
Given an object P_i and the coordinates of its internal points C = {(x, y) | (x, y) ∈ P_i}, the eigenvectors of C are calculated. The first eigenvector yields the main angle α of P_i. A new region R_i, with bounding box BOX(R_i), is then created by rotating P_i by the angle α. Afterward, the unbiased parameter Q is obtained as follows:
Q = AREA(R_i) / AREA(BOX(R_i))    (1)
The range of Q is [0, 1]: the more rectangular the object P_i, the closer the parameter Q is to 1. Figure 5 shows an example of a rectangular object with Q ≈ 1.
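The rotation-invariant computation of Q described above can be sketched in NumPy as follows (a minimal sketch, not the paper's implementation; `rectangularity` is an illustrative name, and the area is approximated by the pixel count at unit sampling):

```python
import numpy as np

def rectangularity(points):
    """Rotation-invariant rectangularity Q (Eq. 1): rotate the region by
    its main angle alpha, taken from the first eigenvector of the
    coordinate covariance, then compute area / bounding-box area."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)                      # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    main_axis = eigvecs[:, np.argmax(eigvals)]    # first eigenvector
    alpha = np.arctan2(main_axis[1], main_axis[0])
    c, s = np.cos(-alpha), np.sin(-alpha)
    aligned = centered @ np.array([[c, -s], [s, c]]).T  # rotate by -alpha
    w = np.ptp(aligned[:, 0]) + 1.0               # +1: unit pixel extent
    h = np.ptp(aligned[:, 1]) + 1.0
    return len(pts) / (w * h)                     # Q = AREA(R_i)/AREA(BOX(R_i))

# A 40 x 20 rectangle rotated by 30 degrees still scores Q ~ 1.
xs, ys = np.meshgrid(np.arange(40), np.arange(20))
rect = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
th = np.deg2rad(30.0)
rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(rectangularity(rect @ rot.T))
```

Because the region is re-aligned with its own principal axis before the bounding box is taken, the same rectangle scores Q ≈ 1 at any orientation, whereas the plain bounding-box ratio would penalize rotated objects.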
Figure 5: Rectangular object identification: a) input region, b) the region and its bounding box, c) the rotated region and its new bounding box, with Q ≈ 1.
Figure 6: Object merging process: a) connected regions, b) re-segmentation taking the rectangular-shaped regions into account.
At this stage, the algorithm aims to find rectangular objects. When irregular objects are found, two operations are performed: cutting and merging. The objective of these operations is to generate regions whose parameter Q is close to 1. As we can observe in Figure 6, some regions belonging to the same class may be split into smaller regions instead of being merged during the initial segmentation. This is the main problem that our algorithm proposes to solve.
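The merging operation can be sketched as a greedy pass that fuses adjacent regions whenever their union is nearly rectangular. This is a minimal sketch, not the paper's exact algorithm: `merge_pass`, `fill_ratio`, and the threshold `q_min` are illustrative names and choices, and the rotation step of Q is omitted for brevity:

```python
import numpy as np

def fill_ratio(pts):
    """Simplified Q (no rotation step): area (point count at unit
    sampling) over axis-aligned bounding-box area."""
    w = np.ptp(pts[:, 0]) + 1.0
    h = np.ptp(pts[:, 1]) + 1.0
    return len(pts) / (w * h)

def merge_pass(regions, adjacency, q_min=0.85):
    """Greedily merge adjacent regions whose union is nearly rectangular.

    regions:   dict id -> (N, 2) float array of pixel coordinates
    adjacency: set of frozenset({id_a, id_b}) pairs of touching regions
    """
    changed = True
    while changed:
        changed = False
        for pair in sorted(adjacency, key=sorted):
            a, b = sorted(pair)
            union = np.vstack([regions[a], regions[b]])
            if fill_ratio(union) >= q_min:      # union is rectangular enough
                regions[a] = union              # keep the smaller id
                del regions[b]
                # Redirect b's adjacencies to a; drop degenerate self-pairs.
                adjacency = {frozenset(a if r == b else r for r in p)
                             for p in adjacency}
                adjacency = {p for p in adjacency if len(p) == 2}
                changed = True
                break
    return regions

# Demo: two halves of a 40 x 20 roof merge; a detached block does not.
xs, ys = np.meshgrid(np.arange(20), np.arange(20))
half = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
out = merge_pass({1: half, 2: half + [20.0, 0.0], 3: half + [0.0, 40.0]},
                 {frozenset({1, 2}), frozenset({1, 3})})
print(sorted(out))  # region 2 merged into 1; region 3 stays separate
```

The rectangularity test is what keeps the pass conservative: a candidate merge that would produce an irregular union is simply rejected, which mirrors the behavior described for the algorithm above.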
4 PRELIMINARY RESULTS
This section presents some experimental results obtained with our re-segmentation approach. The method was tested on QuickBird images of urban regions. For the first experiment, a segmented image superimposed on the original image (300 × 250 pixels), some connected nodes (represented in different colors), and the resulting re-segmentation are shown in Figures 7a, 7b, and 7c, respectively. The over-segmentation was obtained with the region-growing method implemented in SPRING (Câmara et al., 1996). In Figure 7c we observe that some regions did not merge even though they appear spectrally similar. This is because the approach merges only those regions that yield rectangular objects; merging or cutting operations that would produce irregular objects are not performed. Consequently, the segmentation produces more adequate results. In this case, the algorithm took 37 seconds to generate the re-segmentation.
The second experiment used an image (256 × 256 pixels), shown in Figure 8. Figures 8a and 8b display the regions superimposed on the original image and the resulting re-segmentation, respectively. The over-segmentation was also obtained with the region-growing method implemented in SPRING. The image presents several instances of roofs that are broken apart in the segmentation process, as shown in Figure 8a. It is important to emphasize that segmentation is a key step for further image analysis: after applying our approach, later stages of image recognition, or even geographic tasks, can be more accurate or better suited to the application. Despite the small image used in this experiment, the number of input over-segmented regions is very high. In this case, the algorithm took 216 seconds to complete the re-segmentation process.
5 CONCLUSIONS
A new approach for image re-segmentation, along with some aspects of its implementation, has been described. Moreover, to show the potential of our approach, two experimental results have been presented.
VISAPP 2008 - International Conference on Computer Vision Theory and Applications