
can be easily measured, it is not a simple
matter to measure them in the case of Fig. 5-(b), and
obviously a single global threshold cannot
discriminate them. Fig. 6 shows the intensity profile
of the codeword ( 824 : 1, 5, 1, 2, 1, 1, 1, 5 )
comprised of four bars and four spaces, together
with their widths and classification results.
As shown in Fig. 6-(a), it is impossible to select
a single optimal threshold for detecting the widths
of four bars and four spaces because the pixel values
change dynamically according to the widths of bars
and spaces.
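This effect can be sketched numerically. In the toy profile below (all widths and the Gaussian model of the point spread function are our own illustrative assumptions, not values from the paper), blurring leaves a narrow bar noticeably shallower than a wide one, so no single fixed threshold separates both reliably:

```python
import numpy as np

# Ideal bar-space profile: 0 = dark bar, 1 = bright space.
# One narrow bar (2 px) and one wide bar (10 px); widths are illustrative.
profile = np.concatenate([
    np.ones(10),   # wide space
    np.zeros(2),   # narrow bar (indices 10-11)
    np.ones(10),   # wide space
    np.zeros(10),  # wide bar (indices 22-31)
    np.ones(10),   # wide space
])

# Point spread function modeled as a normalized Gaussian (an assumption).
x = np.arange(-5, 6)
psf = np.exp(-x**2 / (2 * 2.0**2))
psf /= psf.sum()

blurred = np.convolve(profile, psf, mode="same")

narrow_min = blurred[10:12].min()   # darkest value inside the narrow bar
wide_min = blurred[22:32].min()     # darkest value inside the wide bar
```

After blurring, `narrow_min` stays far above `wide_min`: the narrow bar never reaches the darkness of the wide one, which is exactly why a global threshold misclassifies its width.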
Figure 4: Warping the data region of Fig. 2 using an affine
transform.
The widths and peaks of narrow bars or spaces
corresponding to 1 or 2 module values become
smaller than those of wide ones, even under the
same illumination, owing to convolution with the
point spread function. The proposed algorithm
employs high-curvature points and local extreme
points to extract the four bars and four spaces from
the warped barcode image. The points of high
curvature on the waveform locate the boundaries of
each bar or space, whereas the local extreme points
estimate their centers.
Figure 5: Segmented bar-space patterns and their
corresponding profiles obtained from (a) the focused
image, (b) the defocused image.
First, our algorithm localizes four local minimum
points and four local maximum points by scanning
the profile of each bar-space pattern, as shown in Fig.
6-(b). A local minimum point whose value is large
compared to the other minima, or a local maximum
point smaller than its adjacent ones, is compensated
to increase the dynamic range, as depicted in Fig.
6-(c). Then, the break regions are detected between
the compensated extreme points,
where a break region denotes a region over which
the profile is flat. The regions marked with
rectangular boxes in Fig. 6-(d) represent the break
regions. Finally, the break regions are partitioned
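The extrema-scanning step above can be sketched as follows; this is a minimal rendering of our own (the function name is ours), and the compensation of weak extrema shown in Fig. 6-(c) is omitted:

```python
def local_extrema(profile):
    """Locate interior local minima (bar centers) and maxima (space
    centers) of a 1-D intensity profile by a single scan.

    A minimal sketch: the paper's algorithm keeps exactly four minima
    and four maxima per bar-space pattern; here we return all interior
    extrema found.
    """
    minima, maxima = [], []
    for i in range(1, len(profile) - 1):
        if profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]:
            minima.append(i)   # candidate bar center
        elif profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]:
            maxima.append(i)   # candidate space center
    return minima, maxima
```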
according to the following rules. Usually, the edges
between bars and spaces lie in the break regions. In
Fig. 7, we define a = x2 − x1 and b = x3 − x2. If a is
greater than b, the larger part of the break region
belongs to the space region, and vice versa. The
ratios are obtained by experimenting with several
hundred bar-space patterns extracted from various
barcode images scanned under varying conditions.
The widths of bars and spaces are therefore
represented as real values rather than integers, so
that the bar-space patterns can be measured as
accurately as possible.
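A sketch of the partitioning rule is given below. Since Fig. 7 is not reproduced here, the exact geometry of x1, x2, x3 and the proportional split are our assumptions; in particular, the simple a/(a + b) split stands in for the empirically tuned ratios mentioned above:

```python
def partition_break_region(x1, x2, x3):
    """Split a flat (break) region between a space and a bar.

    a = x2 - x1 and b = x3 - x2 as defined in the text; the
    proportional rule below is an assumption standing in for the
    paper's empirically determined ratios. If a > b, the larger part
    of the break region is assigned to the space, and vice versa.
    Returning floats matches the paper's use of real-valued widths.
    """
    a = x2 - x1
    b = x3 - x2
    width = x3 - x1
    space_part = width * a / (a + b)  # fraction a/(a+b) goes to the space
    bar_part = width - space_part
    return space_part, bar_part
```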
4 EXPERIMENTAL RESULTS
The “edge to similar edge” estimation method is
employed to check whether the detected bar-space
pattern is correct. The detected bar-space patterns
are converted to encoded codewords using the
lookup table specified in (AIM USA, 1994). The
codewords are then checked for errors using the
Reed-Solomon error correction algorithm. Given the
error-corrected codewords, the message is recovered
by decoding them in the manner specified in (AIM
USA, 1994).
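An "edge to similar edge" measurement sums each adjacent bar-space (or space-bar) pair, so that uniform ink spread (which widens bars and narrows spaces by the same amount) cancels out. The helper below is our own minimal rendering of this idea, applied to the example codeword widths (1, 5, 1, 2, 1, 1, 1, 5) from Fig. 6:

```python
def edge_to_similar_edge(widths):
    """Distances from each edge to the next similar edge.

    widths alternates bar, space, bar, space, ...; each measurement is
    the sum of two consecutive elements (leading edge of one element to
    the leading edge of the next element of the same kind).
    """
    return [widths[i] + widths[i + 1] for i in range(len(widths) - 1)]


# Example codeword (824: 1, 5, 1, 2, 1, 1, 1, 5) from Fig. 6:
t = edge_to_similar_edge([1, 5, 1, 2, 1, 1, 1, 5])
```

The resulting measurements, together with the total pattern width, give a consistency check on the detected widths that a per-element check would fail under ink spread.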
In order to verify our algorithm , we benchmark-
tested our decoder with the test barcode images. Our
database is composed of 153 barcode images, which
were taken under various conditions. In other words,
they are rotated, outfocussed, warped or even
severely damaged by cutting off some region, as
shown in Fig. 8. Almost of them are taken under
badly illuminated conditions. At first, 2,000 pieces
of profiles corresponding to the bar-space patterns
are extracted from our database. Each profile is
tested to extract the bar-space pattern and decode the
corresponding codeword. Among them, 1,466
profiles are detected correctly. As an example, the
image of Fig. 8-(a) is obtained by photocopying the
barcode image several times, so that its upper part is
severely degraded. When it is applied to our decoder,
42 erasures and 32 errors are detected among a total
of 329 codewords, which can still be decoded
correctly through the Reed-Solomon error correction
algorithm. Fig. 8-(b) is an image reassembled, but
aligned incorrectly, after being torn into two parts.
Fig. 8-(c) is obtained from Fig. 8-(a) by cutting off
its right part, and Fig. 8-(d) is taken out of focus
under excessively bright illumination. Our algorithm
decoded 138 of the 153 images correctly. This result
is expected to be good for
ICINCO 2004 - SIGNAL PROCESSING, SYSTEMS MODELING AND CONTROL