2 ROTATION INVARIANT IRIS SIGNATURES
Iris texture is first converted into a polar iris image, a rectangular image containing the iris texture represented in a polar coordinate system. Note that the ISO/IEC 19794-6 standard defines two types of iris imagery: rectilinear images (i.e. images of the entire eye, like those contained in the CASIA database) and polar images (which are basically the result of iris detection and segmentation). As a further preprocessing stage, we compute local texture patterns (LTP) from the iris texture as described in (Du et al., 2006). We define two windows T(X, Y) and B(x, y) with X > x and Y > y (we use 15 × 7 pixels for T and 9 × 3 pixels for B). Let m_T be the average gray value of the pixels in window T. The LTP value of a pixel in window B at position (i, j) is then defined as

LTP_{i,j} = |I_{i,j} - m_T|,

where I_{i,j} is the intensity of the pixel at position (i, j) in B. Note that due to the polar nature of the iris texture, there is no need to define a border handling strategy in the angular direction. LTP thus represents the local deviation from the mean in a larger neighbourhood.
In order to cope with non-iris data contained in the iris texture, an LTP value is set to non-iris if 40% of the pixels in B or 60% of the pixels in T are known to be non-iris pixels.
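A minimal sketch of this LTP computation in Python/NumPy may help; both windows are interpreted here as being centred on the current pixel (our reading of the text), the wrap-around in the angular direction and the clamping at the radial borders are our assumptions, and all function names are hypothetical:

```python
import numpy as np

def ltp_map(polar, mask, T=(15, 7), B=(9, 3)):
    """Sketch of the LTP computation (after Du et al., 2006).

    polar : 2D grayscale polar iris image
    mask  : 2D bool array, True where the pixel is valid iris texture
    T, B  : (width, height) of the larger and smaller window
    """
    h, w = polar.shape
    tw, th = T
    bw, bh = B
    out = np.full((h, w), np.nan)  # NaN marks "non-iris" LTP values
    for i in range(h):
        for j in range(w):
            def window(win_w, win_h):
                # wrap horizontally (polar angle), clamp at radial borders
                rows = np.clip(np.arange(i - win_h // 2, i + win_h // 2 + 1), 0, h - 1)
                cols = np.arange(j - win_w // 2, j + win_w // 2 + 1) % w
                return np.ix_(rows, cols)
            t_idx, b_idx = window(tw, th), window(bw, bh)
            # discard the value if too much of either window is non-iris
            if (~mask[b_idx]).mean() >= 0.40 or (~mask[t_idx]).mean() >= 0.60:
                continue
            m_T = polar[t_idx].mean()                  # mean gray value of T
            out[i, j] = abs(float(polar[i, j]) - m_T)  # |I_ij - m_T|
    return out
```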
2.1 The Original 1D Case
The original algorithm (Du et al., 2006) computes the mean of the LTP values of each row (line) of the polar iris image and concatenates those mean values into a 1D signature which serves as the iris template. Clearly, this vector is rotation invariant, since a rotation of the eye (tilt) merely shifts the polar image circularly along each row and thus leaves the row means unchanged. If more than 65% of the LTP values in a row are non-iris, this signature element is ignored in the distance computation. In order to assess the distance between two signatures, the Du measure (Du et al., 2006) is suggested, which we apply in all variants.
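The 1D signature extraction can be sketched as follows. The exact form of the Du measure is given in (Du et al., 2006) and is not reproduced here, so the `distance` function below substitutes a plain mean absolute difference over the jointly valid elements as a hypothetical stand-in:

```python
import numpy as np

def signature_1d(ltp):
    """Row-mean 1D signature from an LTP map (NaN = non-iris).

    A row contributes a valid signature element only if not more than
    65% of its LTP values are non-iris; invalid elements are kept as
    NaN and skipped in the distance computation.
    """
    valid = ~np.isnan(ltp)
    sig = np.full(ltp.shape[0], np.nan)
    ok = (~valid).mean(axis=1) <= 0.65
    sig[ok] = np.nanmean(ltp[ok], axis=1)
    return sig

def distance(sig_a, sig_b):
    """Stand-in distance (assumption: NOT the Du measure from the paper).

    Mean absolute difference over elements valid in both signatures.
    """
    ok = ~np.isnan(sig_a) & ~np.isnan(sig_b)
    return float(np.mean(np.abs(sig_a[ok] - sig_b[ok])))
```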
2.2 The 2D Extension
LTP row mean and variance capture first-order statistics of the LTP histogram. In order to capture more properties of the iris texture without losing rotation invariance, we propose to employ the row-based LTP histograms themselves as features (histograms are known to be rotation invariant as well and have been used in iris recognition before (Ives et al., 2004)). This of course adds a second dimension to the signatures (the first dimension is the number of rows in the polar iris image, the second is the number of bins used to represent the LTP histograms).
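A sketch of this 2D signature construction follows; the number of bins and the LTP value range are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def signature_2d(ltp, n_bins=16, ltp_max=255.0):
    """Row-wise LTP histograms as a 2D signature.

    Each row of the polar LTP map yields one normalised histogram.
    A rotation of the eye only shifts each row circularly, which
    permutes the pixels but leaves the histogram unchanged, hence
    the signature stays rotation invariant.
    """
    rows = []
    for row in ltp:
        vals = row[~np.isnan(row)]  # drop non-iris entries
        hist, _ = np.histogram(vals, bins=n_bins, range=(0.0, ltp_max))
        # normalise so rows with different numbers of valid pixels compare
        rows.append(hist / hist.sum() if hist.sum() else np.full(n_bins, np.nan))
    return np.array(rows)  # shape: (n_rows, n_bins)
```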
In fact, these 2D signatures give rise to a sort of multi-biometrics situation, since each histogram could be used as a feature vector on its own. We suggest two fusion strategies for our 2D signatures:
1. Concatenated histograms: the histograms are simply concatenated into a large feature vector. The Du measure is applied as in the original version of the algorithm.
2. Accumulated errors: we compute the Du measure for each row (i.e. each single histogram) and accumulate the distances over all rows.
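The two fusion strategies can be sketched as follows; `row_dist` stands for any per-histogram distance (the Du measure in the paper), and the function names are ours:

```python
import numpy as np

def fuse_concatenated(sig2d):
    """Strategy 1: flatten the per-row histograms into one long feature
    vector, to which the original distance measure is applied unchanged."""
    return sig2d.reshape(-1)

def fuse_accumulated(sig2d_a, sig2d_b, row_dist):
    """Strategy 2: apply the distance row by row (one histogram at a
    time) and accumulate the per-row distances into a single score."""
    return float(sum(row_dist(a, b) for a, b in zip(sig2d_a, sig2d_b)))
```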
The iris data close to the pupil are often said to be more distinctive than the "outer" data. We therefore propose to apply a weighting factor > 1 to the innermost row, a factor of 1 to the outermost row, and to derive the weights of the remaining rows by linear interpolation. These weights are applied to the "accumulated errors" fusion strategy by simply multiplying the distance obtained for each row by the corresponding weight.
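The weighting scheme might look as follows; the inner weight of 2.0 and the assumption that row 0 of the polar image is the pupil-side row are hypothetical choices of ours:

```python
import numpy as np

def row_weights(n_rows, inner_weight=2.0):
    """Linearly interpolated row weights: inner_weight (> 1) for the row
    closest to the pupil, down to 1.0 for the outermost row."""
    return np.linspace(inner_weight, 1.0, n_rows)

def fuse_accumulated_weighted(sig2d_a, sig2d_b, row_dist, inner_weight=2.0):
    """Weighted 'accumulated errors' fusion: each per-row distance is
    multiplied by its weight before summing. Assumes row 0 is the
    innermost (pupil-side) row of the polar iris image."""
    w = row_weights(len(sig2d_a), inner_weight)
    return float(sum(wi * row_dist(a, b)
                     for wi, a, b in zip(w, sig2d_a, sig2d_b)))
```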
3 EXPERIMENTAL STUDY
3.1 Setting and Methods
For all our experiments we considered images with 8-bit grayscale information per pixel from the CASIA v1.0 iris image database [1]. We applied the experimental calculations to the images of 108 persons in the CASIA database, using 7 iris images of each person, all cropped to a size of 280 × 280 pixels.
The employed iris recognition system builds upon Libor Masek's MATLAB implementation [2] of a 1D version of the Daugman iris recognition algorithm. First, this algorithm segments the eye image into the iris and the remainder of the image ("iris detection"). Subsequently, the iris texture is converted into a polar iris image. Additionally, a noise mask is generated indicating areas in the polar iris image which originate from eyelids or other non-iris texture noise.
Our MATLAB implementation uses the extracted polar iris image (360 × 65 pixels) for further processing and applies the LTP algorithm to it. Following the
[1] http://www.sinobiometrics.com
[2] http://www.csse.uwa.edu.au/~pk/studentprojects/libor/sourcecode.html
ROTATION-INVARIANT IRIS RECOGNITION - Boosting 1D Spatial-Domain Signatures to 2D