F = PR / (αR + (1 − α)P) (7)
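Equation (7) is the F-measure with relative cost α between precision P and recall R. A minimal sketch of its computation (the example precision/recall values below are illustrative, not results from the paper):

```python
def f_measure(precision, recall, alpha=0.5):
    """F-value with relative cost alpha (Eq. 7): F = P*R / (alpha*R + (1 - alpha)*P)."""
    denom = alpha * recall + (1 - alpha) * precision
    return precision * recall / denom if denom > 0 else 0.0

# With alpha = 0.5, Eq. (7) reduces to the harmonic mean of precision and recall.
print(round(f_measure(0.62, 0.66), 2))  # illustrative P and R values
```

With α = 0.5, as used in the tests below, precision and recall are weighted equally.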
For the tests performed using L-PREEN we chose a relative cost α = 0.5. The average F-value obtained is F = 0.64 (0.60, 0.69), as shown in Fig. 4. Fig. 5 shows some of the segmentation results obtained using L-PREEN. We compare the L-PREEN results with a publicly available method for boundary detection based on tensor voting with perceptual grouping in natural scenes. Perceptual grouping extracts illusory figures or completed boundaries following the Gestalt principles of visual perception. The comparison method is therefore a non-neural scheme of perceptual grouping of natural figures against cluttered backgrounds, in contrast to L-PREEN. Loss et al. (Loss et al., 2009) proposed an iterative method based on multiscale tensor voting. Their approach consists of iteratively removing image segments and applying a new voting round over the remaining segments, in order to estimate the most reliable saliency. The tensor representation chosen uses subsets of pixels to form the tensors, initialized as ball or stick tensors. This representation was selected to reduce the number of tensors, which in turn reduces the computation time. Loss et al. evaluate their method on two datasets: synthetic fruit images and the BSDS300 Berkeley dataset. In the latter evaluation, they use five base segmentation methods (Gradient Magnitude (GM), Multi-scale Gradient Magnitude (MGM), Texture Gradient (TG), Brightness Gradient (BG) and Brightness/Texture Gradient (BTG)) to generate a Boundary Posterior Probability map (`segmentation feeders'). This map is employed as a preprocessing step for their method. To quantify the results, they report the F-value and Precision-Recall graphs. The F-values obtained with the five methods over the 100 test images of the Berkeley dataset were 0.57, 0.58, 0.57, 0.60 and 0.62, respectively. L-PREEN obtains an F-value of 0.64, as shown in Table 1, better than all five versions of the comparison method.
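The iterative scheme just described can be sketched as follows. This is an illustrative outline only: `saliency_vote` is a hypothetical stand-in for the multiscale tensor-voting step, and the keep fraction and iteration count are placeholder values, not Loss et al.'s actual parameters.

```python
def iterative_refinement(segments, saliency_vote, keep_fraction=0.5, iterations=2):
    """Repeatedly vote over the remaining segments and drop the least salient ones."""
    segs = list(segments)
    for _ in range(iterations):
        scores = saliency_vote(segs)                # new vote over what remains
        ranked = sorted(zip(scores, segs), key=lambda t: t[0], reverse=True)
        keep = max(1, int(len(ranked) * keep_fraction))
        segs = [s for _, s in ranked[:keep]]        # remove low-saliency segments
    return segs

# Toy example: saliency = segment length, a stand-in for tensor-voting saliency.
segments = ["abcde", "ab", "abc", "a"]
print(iterative_refinement(segments, lambda segs: [len(s) for s in segs]))
```

Each pass re-votes over the surviving segments only, so spurious segments removed early no longer influence the saliency estimates of later passes.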
We took the Matlab code of the gPb method (Global Probability of Boundary) (Arbelaez et al., 2011), ranked third in the Berkeley benchmark (http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/bench/html/images.html), offered by its authors on the University of California, Berkeley website, and ran it on the 100 test images, obtaining an average execution time of 403.29 s per image, while L-PREEN needs 159.12 s.
Figure 4: Precision-recall curve.
Table 1: Comparative results.

Method                              F-value
Loss et al.'s method with GM        0.57
Loss et al.'s method with MGM       0.58
Loss et al.'s method with TG        0.57
Loss et al.'s method with BG        0.60
Loss et al.'s method with BTG       0.62
L-PREEN                             0.64
3 CONCLUSIONS
This work presents a new model, L-PREEN, for detecting the boundaries and surface perception of natural colour images. The model is bio-inspired by processes in the V1, V2, V4 and IT visual areas of the Human Visual System.
The L-PREEN model includes orientational filtering, competition among orientations and positions, and cooperation through bipole profile fields and contour learning. The proposed architecture has been compared with Loss et al.'s method (Loss et al., 2009), obtaining better results. A major advantage of the L-PREEN model is its speed compared to other methods. L-PREEN can be implemented using matrix and convolution operations, making it compatible and scalable with parallel processing hardware. Such a parallel implementation is planned as future work.
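As an illustration of why such operations parallelize well (this is not the actual L-PREEN code), orientational filtering reduces to 2-D convolutions whose output pixels are independent of one another. A minimal pure-Python sketch, using a hypothetical Sobel-like vertical-edge kernel:

```python
def convolve2d(image, kernel):
    """Valid-mode 2-D convolution: slide the flipped kernel over the image."""
    kh, kw = len(kernel), len(kernel[0])
    fk = [row[::-1] for row in kernel[::-1]]  # flip kernel for true convolution
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Each output value depends only on its local window,
            # so all of them can be computed in parallel.
            row.append(sum(fk[u][v] * image[i + u][j + v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

# Hypothetical vertical-edge kernel; it responds strongly to vertical boundaries.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
image = [[0, 0, 1, 1]] * 3  # a vertical step edge
print(convolve2d(image, kernel))
```

In practice such filters would be applied at several orientations, with the per-pixel independence making the whole bank map directly onto GPU-style parallel hardware.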