erode), as shown in Figure 4(c). Secondly, it is important to notice that stomata images have texture on their epidermis, which precludes the direct application of the Watershed. Therefore, six morphological operators were applied to the same a* channel prior to the Watershed, in this order: Open, Erode, Reconstruct, Close, Dilate and Reconstruct, using a kernel k = 2. These operators were applied to smooth the image.
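As a concrete illustration, the sketch below reproduces this smoothing sequence in Python with OpenCV and scikit-image. The library choice, the elliptical structuring element, the mapping of k = 2 to a (2k+1) x (2k+1) kernel, and the images used as masks in the reconstruction steps are our assumptions, since the paper does not specify them.

import cv2
from skimage.morphology import reconstruction

def smooth_a_channel(a_channel, k=2):
    # Open, Erode, Reconstruct, Close, Dilate, Reconstruct, in that order,
    # to suppress the epidermal texture before the Watershed (assumed kernel).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * k + 1, 2 * k + 1))

    opened = cv2.morphologyEx(a_channel, cv2.MORPH_OPEN, kernel)
    eroded = cv2.erode(opened, kernel)
    # Reconstruction by dilation: grow the eroded image back under the opened one.
    rec1 = reconstruction(eroded, opened, method='dilation').astype(a_channel.dtype)

    closed = cv2.morphologyEx(rec1, cv2.MORPH_CLOSE, kernel)
    dilated = cv2.dilate(closed, kernel)
    # Reconstruction by erosion: shrink the dilated image back onto the closed one.
    rec2 = reconstruction(dilated, closed, method='erosion').astype(a_channel.dtype)
    return rec2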
Figure 4(d-g) illustrates the images resulting from the application of the morphological operators. We then applied the first Watershed, producing an image with watershed lines. Both the watershed lines and the spot locations were overlaid onto the a* channel image. Next, we used a second Watershed to find the regions, using the spots as markers, as illustrated in Figure 4(h). Finally, we combined the mask obtained from the previous step with the original image to generate the result shown in Figure 4(i).
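For clarity, a rough sketch of the two Watershed passes is given below, assuming a scikit-image implementation; the way the lines and spot locations are imprinted onto the a* channel follows our reading of the description above and is not spelled out in the paper.

from scipy import ndimage as ndi
from skimage.segmentation import watershed

def two_pass_watershed(smoothed, spots):
    # First pass: unseeded Watershed; with watershed_line=True the ridge
    # pixels between basins are labelled 0.
    first = watershed(smoothed, watershed_line=True)
    lines = first == 0

    # Overlay step: imprint the watershed lines and the detected spots onto the
    # smoothed a* channel, so lines act as barriers and spots as regional minima.
    imprinted = smoothed.copy()
    imprinted[lines] = smoothed.max()
    imprinted[spots > 0] = smoothed.min()

    # Second pass: seeded Watershed using the labelled spots as markers.
    markers, _ = ndi.label(spots)
    labels = watershed(imprinted, markers=markers, watershed_line=True)
    return labels

# The labelled regions are then turned into a mask and combined with the
# original image (Figure 4(i)); how background regions are suppressed at this
# point is not detailed here.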
4 RESULTS
In this section we present the results divided into two groups: 1) Detection/Counting, and 2) Segmentation. The first group is a necessary preliminary step for the second one. Both counting and segmentation were applied to the same dataset of 64 image crops (1024 x 1024 pixels each).
For performance evaluation of the counting and
segmentation approaches, we use recall, precision
(Baeza-Yates and Ribeiro-Neto, 1999) and the F-
Measure (Arbelaez et al., 2011), which are defined
as follows:
\[ \text{Recall} = \frac{TP}{TP + FN} \qquad (8) \]
\[ \text{Precision} = \frac{TP}{TP + FP} \qquad (9) \]
\[ F = \frac{2 \cdot \text{Recall} \cdot \text{Precision}}{\text{Recall} + \text{Precision}} \qquad (10) \]
where TP is a true positive, FN is a false negative, and FP is a false positive. Note that in segmentation, a TP indicates that a pixel is identified as a stoma by both the algorithm and the gold standard; a FN corresponds to the case where the algorithm finds a background pixel whereas the gold standard marks a stoma pixel; and a FP occurs whenever the algorithm identifies a stoma pixel that is in fact a background pixel according to the gold standard.
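The pixel-level scores can be computed directly from the binary masks; the short sketch below is a straightforward implementation of Equations (8)-(10) and is not taken from the paper.

import numpy as np

def pixel_scores(pred_mask, gold_mask):
    pred = pred_mask.astype(bool)
    gold = gold_mask.astype(bool)
    tp = np.sum(pred & gold)    # stoma pixel in both result and gold standard
    fp = np.sum(pred & ~gold)   # stoma pixel in result, background in gold standard
    fn = np.sum(~pred & gold)   # background in result, stoma pixel in gold standard
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    return recall, precision, f_measure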
In counting, a FP occurs when a dot is found in an area where there is no stoma. A single dot found within the actual stoma area represents a TP. Finally, a FN represents the absence of a dot in an actual stoma area.
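A possible implementation of this counting-level evaluation is sketched below; the labelling of the gold-standard stoma areas and the handling of extra dots inside an already matched area (they are simply ignored) are our assumptions.

from scipy import ndimage as ndi

def counting_scores(dots, gold_mask):
    # dots: list of (row, col) detections; gold_mask: binary gold-standard stoma areas.
    regions, n_stomata = ndi.label(gold_mask)
    hit = set()
    fp = 0
    for row, col in dots:
        region = regions[row, col]
        if region > 0:
            hit.add(region)   # dot inside an actual stoma area: TP (counted once)
        else:
            fp += 1           # dot where there is no stoma: FP
    tp = len(hit)
    fn = n_stomata - tp       # stoma areas left without any dot: FN
    return tp, fp, fn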
Table 1 shows a sample of 5 randomly chosen images from the 64-image set used in this work. Its purpose is only to convey an idea of how the results were calculated; the full range of results is shown in the following subsection.
4.1 Detection and Counting
In the detection and counting step, we compare our approach with two other methods: 1) manual counting, considered the gold standard, and 2) the method by Oliveira et al. (Oliveira et al., 2014). Before applying the proposed method to the image crops (1024 x 1024 pixels), we discarded all stomata that were split across their center by the crop boundary; stomata that were only partially cut by the crop, but with the whole center preserved, were considered. The results are shown in Table 1, which compares the performance of the algorithm against the manual procedure using the recall and precision measures.
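The crop-boundary rule above amounts to keeping a stoma only if its center lies inside the crop; a trivial sketch, assuming the center coordinates are available from the annotation:

def keep_for_evaluation(stoma_centers, crop_size=1024):
    # A stoma is considered only if its center is preserved inside the crop,
    # even when the rest of it is partially cut off by the crop boundary.
    return [(row, col) for row, col in stoma_centers
            if 0 <= row < crop_size and 0 <= col < crop_size]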
In Figure 4(b) we illustrate the results in terms
of the stomata identified directly on the image of the
plant tissue. As we have mentioned earlier, we used
64 cropped images of the plant (Ugni Molinae), and
the results show an improvement over the work by
Oliveira (Oliveira et al., 2014). The proposed ap-
proach was compared to manual detection for all the
samples, and the following results were obtained:
1. Recall: The average recall reached by our method was 98.24%, against 95.13% obtained by Oliveira (Oliveira et al., 2014). Furthermore, recall improved upon the latter work in 84.37% of the images.
2. Precision: The average precision reached by the proposed method was 98.34%, against 92.81% from Oliveira (Oliveira et al., 2014). Moreover, compared to the latter work, the proposed method showed higher precision in 90.62% of the images.
3. F-Measure: The average F-Measure obtained by our method was 98.25%, against 93.80% from Oliveira (Oliveira et al., 2014). It is also worth noting that 84.37% of the images scored an improved F-Measure in comparison to the latter work.
The graph shown in Figure 5 illustrates the comparison of the methods for each image using the F-Measure.