Robust Iris Segmentation under Unconstrained Settings

João C. Monteiro, Hélder P. Oliveira, Ana F. Sequeira and Jaime S. Cardoso

INESC TEC (formerly INESC Porto) and Faculdade de Engenharia, Universidade do Porto, Porto, Portugal

Keywords:

Biometrics, Iris Segmentation, Unconstrained Environment, Gradient Flow, Shortest Closed Path.

Abstract:

The rising challenges in the field of iris recognition, concerning the development of accurate recognition algorithms using images acquired under an unconstrained set of conditions, are leading to a renewed interest in the area. Although several works already report excellent recognition rates, these values are obtained by acquiring images in very controlled environments. The use of such systems in daily security activities, such as airport security and bank account management, is therefore hindered by the inherent unconstrained nature under which images are to be acquired. The proposed work focused on mutual context information from the iris centre and the iris limbic contour to perform robust and accurate iris segmentation in noisy images. A random subset of the UBIRIS.v2 database was tested with a promising E1 classification rate of 0.0109.

1 INTRODUCTION

In almost everyone's daily activities, personal identification plays an important role. The most traditional techniques to achieve this goal are knowledge-based and token-based automatic personal identification. Token-based approaches take advantage of a personal item, such as a passport, driver's license, ID card, credit card or a simple set of keys, to distinguish between individuals. Knowledge-based approaches, on the other hand, are based on something the user knows that, theoretically, nobody else has access to, for example passwords or personal identification numbers. Both of these approaches present obvious disadvantages: tokens may be lost, stolen, forgotten or misplaced, while passwords can easily be forgotten by a valid user or guessed by an unauthorized one (Jain et al., 2000). In fact, all of these approaches stumble upon an obvious problem: any piece of material or knowledge can be fraudulently acquired.

Biometrics represents a return to a more natural way of identification: testing someone by what that person is, instead of relying on something he owns or knows, seems likely to be the way forward.

Several biological traits in humans show a considerable inter-individual variability: fingerprints and palmprints, the shape of the ears and the pattern of the iris, among others. Biometrics works by recognizing patterns within these biological traits, unique to each individual, to increase the reliability of recognition. The growing need for reliability and robustness has made biometrics a focal point of research attention. The choice of a specific biometric trait is weighted by a set of qualitative values that describe its overall quality: universality, uniqueness, collectability and permanence (Jain et al., 2000). With all these variables in mind, the iris presents itself as a leading candidate to become the standard biometric trait: it is universal, its huge variability assures uniqueness for each individual, and it is an easily accessible organ that is very difficult to modify.

Even though excellent rates of recognition are found in the literature (Daugman, 2006), these results are associated with a set of acquisition conditions that constrain the quality of the tested images. The majority of the developed iris recognition systems rely on near-infrared (NIR) imaging rather than visible light (VL). This is due to the fact that fewer reflections from the cornea in NIR imaging result in a maximized signal-to-noise ratio (SNR) in the sensor, thus improving the contrast of iris images and the robustness of the system. NIR imaging, however, presents a series of hazards, as no instinctive response (such as blinking) is triggered in response to excessively strong illumination. Another typically imposed constraint on the user of an iris recognition system is the need to stop-and-stare at a close distance to the sensor (i.e. user collaboration). These factors create important limitations to the applicability of iris recognition algorithms in real-life conditions, such as military applications or bank account management. The development of iris recognition algorithms that are capable of encompassing such limitations has been gaining focus in recent years.

Monteiro J., Oliveira H., Sequeira A. and Cardoso J.
Robust Iris Segmentation under Unconstrained Settings.
DOI: 10.5220/0004281701800190
In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP-2013), pages 180-190
ISBN: 978-989-8565-47-1
Copyright © 2013 SCITEPRESS (Science and Technology Publications, Lda.)

In this work we focus on iris segmentation, as proposed in (Daugman, 1993). Iris segmentation consists of the detection of the two defining contours of the iris region. In the eye region, three main regions can be easily distinguished: the sclera, also known as the white of the eye, is the most easily distinguishable part, surrounding the iris, the coloured region of the eye; inside the iris, a darker region is distinguishable, corresponding to the pupil, the region through which light enters the eye. Besides its distinctive properties as a biometric trait, the iris is also a contractile structure responsible for adapting the size of the pupil, so as to regulate the amount of light that enters the eye. Two main contours can be defined as the separating boundaries of the three aforementioned regions: the limbic contour separates the iris from the sclera, and the pupillary contour separates the iris from the pupil. The detection of these contours is the main goal of segmentation and an essential step in the development of high accuracy recognition systems.

We argue that iris segmentation can beneﬁt from

the simultaneous detection of the iris centre and iris

external contour. When performed independently,

both tasks are nontrivial since many other parts of

the image may be falsely detected. However, the two

tasks can beneﬁt greatly from serving as context for

each other. Central to our method to detect iris cen-

tre candidates is the use of gradient ﬂow information

with a specific gradient vector field template; the detection of the limbic contour relies on the search for strong closed contours around the centre candidates.

The remainder of this paper is organized as follows: Section 2 summarizes relevant works concerning iris segmentation; Section 3 presents an overview of the algorithm, the theoretical basis behind it, and a detailed analysis of the different steps of the limbic contour segmentation; Section 4 presents the obtained results; and, finally, the conclusions and future prospects are summarized in Section 5.

2 RELATED WORK

The original approach to the segmentation task was

proposed by Daugman (Daugman, 1993) and con-

sisted in the maximisation of an integro-differential

operator. In a different approach, Wildes (Wildes,

1997) suggested a method involving edge detection

followed by circular Hough transform (CHT). For

years, several works in the iris biometrics field focused on Daugman's and Wildes's algorithms, presenting variations at many levels.

One example is the CHT-based method used for

the segmentation step in Masek’s algorithm (Masek,

2003). Ma et al. (Ma et al., 2004) created a sys-

tem that mixed both the CHT segmentation approach

and the rubber sheet model normalization, introduc-

ing some concepts like pre-processing of iris images

for specular reﬂection removal.

The integro-differential operator and the CHT are still widely used for segmenting iris images, offering good segmentation accuracy but at a considerable computational cost. Radman et al. (Radman et al., 2012) address this problem with a simple solution, localizing the initial centre of the pupil using a circular Gabor filter (CGF).

In the work of He et al. (He et al., 2009), an

Adaboost-cascade iris detector is built to extract a

rough position of the iris centre and then the centre

and radius of the circular iris are localised by employ-

ing an elastic model named “pulling and pushing”.

The segmentation of the pupil and iris by ﬁtting a ro-

tated ellipse, after a sequence of procedures for com-

pensating the detected noise, was proposed by Zuo

and Schmid (Zuo and Schmid, 2010).

Since iris boundaries are often not circular or el-

liptical, curve ﬁtting techniques can be valuable to

approximate real iris contours (Proenc¸a et al., 2010).

To further improve segmentation performance, recent

methods attempted to use active contour models to ac-

curately localise irregular iris boundaries (Daugman,

2007; Vatsa et al., 2008; Shah and Ross, 2009). The

approach taken by Chen et al. (Chen et al., 2010) con-

sisted in detecting the sclera region of the eye, thresh-

olding and ﬁltering the image to detect a rectangular

region for iris localization. An edge map of the region

of interest was then obtained with a horizontal Sobel operator, and a dynamic programming variation of the CHT algorithm was implemented to detect the limbic

boundary. This method corrects the non-circularities

of the off-angle iris and combines the intersection of

circles obtained by the two CHT algorithms and a

linear Hough transform to perform eyelid detection.

More recently, Pawar et al. (Pawar et al., 2012) ap-

plied geodesic active contours to perform segmenta-

tion.

Some works use texture analysis to perform seg-

mentation. Sanchez-Avila et al. (Sanchez-Avila et al.,

2002) published a work based on dyadic wavelet transform zero-crossings as an iris signature: images were pre-processed by histogram stretching (improving the contrast between pupil, iris and sclera) to aid the limbic boundary detection, and then the same algorithm was applied inside the limbic region to detect the pupillary


boundary. Nabti and Bouridane's work (Nabti and Bouridane, 2008) is based on a multiscale approach, using Gabor filters and wavelet transform coefficients to improve the edge detection process that determines the success of iris segmentation.

Gradient vector field based methods have also appeared in the literature, such as the work of Chen et al. (Chen et al., 2011). In the present work, gradient flow around the iris centre plays an important role in the segmentation of the limbic contour.

When analysing most of the methods cited in the

literature, it is possible to detect some main draw-

backs. In almost all of these methods, inner and outer

boundaries, eyelashes and eyelids are detected in dif-

ferent steps, causing a considerable increase in pro-

cessing time of the system. Usually, the inner and

outer boundaries are detected by circle ﬁtting tech-

niques. This is a source of error, since the iris bound-

aries are not exactly circles and in noisy situations, the

outer boundary of iris may not present sharp edges.

In some of the aforementioned algorithms, there are

a lot of implicit or explicit assumptions about the ac-

quisition process, which are no longer valid in uncon-

strained acquisition scenarios. Therefore, some of the

promising results reported in the literature must be

taken with caution and reassessed under these new,

more challenging, conditions.

In recent years it has been recognized that the

path forward, regarding iris recognition, is the de-

velopment of algorithms that can work independently

of subject collaboration and proper NIR illumina-

tion conditions, in order to achieve robust (i.e. ac-

curate even with noisy images) and unconstrained

(i.e. accurate for several sets of acquisition condi-

tions: distance, movement, illumination) iris recog-

nition and, in this way, become a real-world applica-

ble method (Ross, 2010). This paradigm shift led to

the rise of new trends in the research of iris recogni-

tion, for example, exploring VL illumination instead

of NIR.

3 SIMULTANEOUS DETECTION

OF IRIS CENTRE AND LIMBIC

CONTOUR

Researchers are now paying more attention to the con-

text to aid visual recognition processes. Context plays

an important role in recognition by the human visual

system, with many important visual recognition tasks

critically relying on it.

The proposed work aimed to accomplish accurate

iris segmentation by using simultaneously acquired

information from two main sources: iris centre and

limbic contour. Both sources contribute to discrimi-

nate between a series of iris segmentation candidates.

Context information regarding typical iris character-

istics in eye images, namely colour and shape, repre-

sented the basis of the developed algorithm. By using

more than a single source of information, we aimed to

lower the misdetection of areas likely to be wrongly

segmented, such as eyebrows and glass frames.

3.1 Algorithm Overview

The main steps of the proposed algorithm are system-

atised in Figure 1. A simpliﬁcation is adopted in rela-

tion to the main rationale outlined above. The simul-

taneous detection of the iris centre and limbic contour

will be addressed by ﬁrst over-detecting centre candi-

dates, followed by a contour detection around each of

them.

The centre candidates are estimated by a method

resembling the use of convergence index ﬁlters (Ko-

batake and Hashimoto, 1999). Next, a window cen-

tred in each candidate is converted into the polar do-

main and a shortest path algorithm is used to deter-

mine good closed paths around the centre. Using

combined data from the centre and respective contour,

the best pair centre/contour is selected.

Figure 1: Flowchart of the proposed iris segmentation algo-

rithm.

Typical iris images present two very distinct re-

gions: a high intensity region corresponding to the white of the eye and the skin, and the iris region, at least partially

circular and lower in intensity. These two sources of

knowledge can be presented separately but are intrin-

sically connected. The fact that the iris is a darker

region against a brighter background translates into a

speciﬁc divergent gradient orientation from its centre.

At the same time the limbic contour (iris outer edge)

will present a high gradient magnitude as well as a

closed shape. The approach taken in this work was

that of detecting pairs of iris centre and limbic contour

candidates that maximise a quality factor weighted by

the aforementioned combined knowledge.

3.2 Iris Centre Detection

Iris centre candidates are detected using a template


matching algorithm based on gradient vector ﬁeld ori-

entation. Theoretically the gradient is a vector ﬁeld

that points in the direction of the greatest rate of in-

crease of a scalar ﬁeld. Considering an image as a

scalar ﬁeld, it is easy to perceive the gradient as a vec-

tor ﬁeld that points from darker regions (of lower in-

tensity) towards brighter regions (of higher intensity).

Figure 2(b) depicts a simple example of gradient vec-

tor ﬁeld orientation on a synthetic image. The ex-

pected behaviour of vectorial divergence from darker

regions to brighter regions is observed. These obser-

vations can be easily extrapolated to typical eye im-

ages.


Figure 2: Gradient orientation vector ﬁeld in synthetic im-

ages. Notice how the vector ﬁeld diverges from darker re-

gions and converges to brighter regions.

The iris is surrounded by two distinct higher inten-

sity regions: the sclera and the skin. With this in mind

a divergent gradient orientation is expected from the

centre of the iris towards the aforementioned brighter

regions, as observed in Figure 3(b).


Figure 3: The iris centre detection is based on two vector

ﬁelds: a) Template vector ﬁeld and b) Gradient orientation

vector ﬁeld.

The centre candidates are, thus, detected by computing the cross-correlation, c_corr, between the gradient vector field orientation and the divergent template vector field depicted in Figure 3(a). The c_corr values are calculated as:

c_corr = (f ⋆ g)[n] = ∑_m f*[m] g[n + m]    (1)

where f and g represent the gradient orientation vector field and the template vector field, respectively. The resulting c_corr matrix can be graphically represented as exemplified in Figure 4(a), where the values range from −1 to 1, with −1 being represented in blue and 1 in red. The centre candidates are detected as the N local maxima with the highest c_corr values.


Figure 4: Cross-correlation results on the synthetic image

from Figure 2(a).
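As an illustrative sketch (not the authors' implementation), the centre-candidate search can be prototyped in a few lines of numpy; the window radius, the candidate count N and the greedy non-maximum suppression are our own assumptions:

```python
import numpy as np

def centre_candidates(img, radius=15, n_candidates=4):
    """Rank iris-centre candidates: correlate the unit gradient
    orientation field with a divergent template vector field."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag[mag == 0] = 1.0
    ux, uy = gx / mag, gy / mag          # unit gradient orientation field

    # Divergent template: unit vectors pointing away from the window centre.
    r = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(r, r, indexing="ij")
    tmag = np.hypot(dx, dy)
    tmag[radius, radius] = 1.0           # avoid 0/0 at the template centre
    tx, ty = dx / tmag, dy / tmag

    # Cross-correlation of the two vector fields (sum of dot products).
    h, w = img.shape
    pu, pv = np.pad(ux, radius), np.pad(uy, radius)
    corr = np.zeros((h, w))
    for i in range(2 * radius + 1):
        for j in range(2 * radius + 1):
            corr += tx[i, j] * pu[i:i + h, j:j + w] \
                  + ty[i, j] * pv[i:i + h, j:j + w]

    # Greedily keep the N strongest, well-separated local maxima.
    order = np.argsort(corr, axis=None)[::-1]
    cands = []
    for idx in order:
        y, x = np.unravel_index(idx, corr.shape)
        if all(abs(y - cy) + abs(x - cx) > radius for cy, cx in cands):
            cands.append((int(y), int(x)))
        if len(cands) == n_candidates:
            break
    return cands
```

On a synthetic dark disc over a bright background, the strongest candidate lands at the disc centre, since the gradient diverges from it exactly as the template does.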

3.3 Limbic Contour Detection

In the proposed method for limbic boundary detection

we consider the image grid as a graph, with pixels

as nodes and edges connecting neighbouring pixels.

With this in mind the proposed algorithm deﬁnes a

limbic contour candidate as the best closed contour

around a given centre candidate.

The computation of this best contour is simplified by working in polar coordinates (relative to each iris centre candidate). In this domain a closed contour around a given point becomes a curve from the left side of the polar image (θ = 0°) to the right side of the same image (θ = 360°). With the aforementioned consideration of the image as a graph, computation of the best closed contour becomes computation of the shortest left-to-right path in the polar domain. To better understand the proposed limbic contour detection algorithm, we start by introducing some graph concepts (Oliveira et al., 2012).

3.3.1 Graph Concepts

A graph G = (V, A) is composed of two sets, V and A. V is the set of nodes, and A the set of arcs (p, q), p, q ∈ V. The graph is weighted if a weight w(p, q) is associated with each arc. The weight of each arc, w(p, q), is a function of the pixel values and relative pixel positions. A path from vertex (pixel) v_1 to vertex (pixel) v_n is a list of unique vertices v_1, v_2, ..., v_n, with v_i and v_{i+1} corresponding to neighbour pixels. The total cost of a path is the sum of each arc weight in the path, ∑_{i=2}^{n} w(v_{i−1}, v_i).

A path from a source vertex v to a target vertex u is said to be the shortest path if its total cost is minimum among all v-to-u paths. The distance between a source vertex v and a target vertex u on a graph, d(v, u), is the total cost of the shortest path between v and u.

A path from a source vertex v to a sub-graph Ω is said to be the shortest path between v and Ω if its total cost is minimum among all v-to-u ∈ Ω paths. The distance from a node v to a sub-graph Ω, d(v, Ω), is the total cost of the shortest path between v and Ω:

d(v, Ω) = min_{u ∈ Ω} d(v, u).    (2)

A path from a sub-graph Ω_1 to a sub-graph Ω_2 is said to be the shortest path between Ω_1 and Ω_2 if its total cost is minimum among all v ∈ Ω_1-to-u ∈ Ω_2 paths. The distance from a sub-graph Ω_1 to a sub-graph Ω_2, d(Ω_1, Ω_2), is the total cost of the shortest path between Ω_1 and Ω_2:

d(Ω_1, Ω_2) = min_{v ∈ Ω_1, u ∈ Ω_2} d(v, u).    (3)

3.3.2 Algorithm for Limbic Contour Detection

Intuitively, the limbic boundary appears as a closed

contour in the image, enclosing the iris centre, and

over pixels with a strong transition in the grey-level

values. Assuming that paths through pixels with high

gradient are preferred over paths through low gradient

pixels, the limbic contour can then be found among

the shortest closed paths enclosing the iris centre can-

didate.

A difﬁculty with searching for the shortest closed

path enclosing a given point C is that small paths, col-

lapsing in the point C, are naturally favoured. We

overcome that difﬁculty by working on polar coordi-

nates. We assume that the origin of the coordinates is

the candidate iris centre.

A circular window centred in each candidate is

transformed to polar coordinates. A closed path in

the original Cartesian coordinates (Figure 5(a)) is

transformed into a path from left to right margins in

the window in polar coordinates, starting and end-

ing in the same row of the transformed window (Fig-

ure 5(b)).
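The Cartesian-to-polar unwrapping of the circular window can be sketched as follows (a minimal illustration; nearest-neighbour sampling and the grid resolution are our own choices, not the paper's):

```python
import numpy as np

def to_polar(img, centre, r_max, n_theta=360, n_r=None):
    """Unwrap a circular window around `centre` into a polar image:
    rows index radius, columns index angle, so a closed contour around
    the centre becomes a left-to-right (theta = 0..360 deg) path."""
    n_r = n_r or int(r_max)
    cy, cx = centre
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(1, r_max, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = cy + rr * np.sin(tt)
    xs = cx + rr * np.cos(tt)
    # nearest-neighbour sampling, clamped to the image border
    ys = np.clip(np.rint(ys).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

With this layout, a circle centred on the candidate maps to a horizontal line in the polar image, which is exactly the kind of left-to-right path the shortest path search looks for.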

Note that the main assumptions are a) the candi-

date centre lies within the true limbic contour; b) the

limbic contour constitutes a closed path over pixels of

strong gradient. The limbic contour is not necessar-

ily circular and the candidate centre does not need to

match the true iris centre for a correct contour detec-

tion. As long as one centre candidate lies within the

iris region one closed contour around it will be de-

tected, regardless of the distance between the detected

iris centre candidate and the real iris centre.

3.3.3 Computation of the Shortest Closed Path

In spite of the efﬁciency of the computation of the

shortest path between the whole left and right mar-

gins, or between two pre-deﬁned points in the mar-

gins, or between one of the margins and a pre-deﬁned


Figure 5: a) Original limbic contour in Cartesian coordi-

nates; b) corresponding left-to-right path in the polar do-

main.

point in the other margin, the search for the short-

est path between the left and right margins with the

constraint that the path should start and end in the

same row seems to increase the complexity of the procedure. As is typical, optimization with constraints is more difficult than optimization without them.

Had one been interested in the simple shortest path between the left and right margins, the computation could be performed very efficiently using dynamic programming. Under the simplifying assumption

that the vertical paths do not zigzag back and forth, up

and down, in the transformed image, the search may

be restricted among connected paths containing one,

and only one, pixel in each column between the two

end-columns.

Formally, let I be an N_1 × N_2 window (after the polar coordinate transform) with N_1 columns and N_2 rows; define an admissible path to be

s = {(x, y(x))}_{x=1}^{N_1}, s.t. ∀x, |y(x) − y(x − 1)| ≤ 1,

where y is a mapping y : [1, ..., N_1] → [1, ..., N_2]. That is, an admissible path is an 8-connected path of pixels in the image from left to right, containing one, and only one, pixel in each column of the image.

The first step is to traverse the image from the second column to the last column and compute the cumulative minimum cost C for each entry (i, j):

C(i, j) = min { C(i−1, j−1) + w(p_{i−1, j−1}; p_{i, j}),
                C(i−1, j) + w(p_{i−1, j}; p_{i, j}),
                C(i−1, j+1) + w(p_{i−1, j+1}; p_{i, j}) },

where w(p_{i, j}; p_{l, m}) represents the weight of the edge incident with the pixels at positions (i, j) and (l, m). At the end of this process,

min_{j ∈ {1, ..., N_2}} C(N_1, j)

indicates the end of the minimal connected path.

Hence, in the second step, one backtracks from this minimum entry of C to find the optimal path.

Note that this procedure gives not only the shortest path between the left and right margins but also yields the shortest path between any point in the right margin and the whole left margin: for any point (N_1, j) in the right margin, C(N_1, j) indicates the cost of the shortest path between (N_1, j) and the whole left margin, see Figure 6. Finally, it should be clear how to change the initial conditions of the above procedure to yield the shortest path between two pre-defined points in the opposite margins.


Figure 6: Example of shortest path starting point detection. (a) shows all paths from the left margin to the right margin and (b) all the paths from the right margin to the left margin. As is easily deducible, at least one closed contour will result from this process.

Unfortunately, the computation of the shortest path constrained to start and end in the same row (corresponding to closed contours in the original window) does not seem amenable to such an efficient procedure. The brute force solution of computing the shortest path between the i-th point in the left margin and the i-th point in the right margin, for i = 1, ..., N_2, and taking the minimum, is not compatible with the near real-time requirements of our application.

Noting that if j and ℓ are two distinct points in the right margin, then the shortest paths between each of these points and the whole left margin do not intersect, it is trivial to conclude that there is at least one point m in the right margin for which the shortest path between m and the whole left margin also starts at row m. Note that these paths correspond to closed paths in the original window in Cartesian coordinates (not necessarily including the shortest one). Similarly, interchanging the roles of the left and right margins, it is possible to obtain at least one point n in the left margin for which the shortest path to the whole right margin is closed. By computing all the paths from the left to the right margin (and vice-versa), a set of k closed contours is obtained for each centre candidate. The procedure is illustrated in Figure 6.
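The fixed-point observation above can be turned into code: run the dynamic program once, record for every end row the start row of its best path, and keep the rows that map to themselves. This is a self-contained numpy sketch; as before, node costs stand in for the paper's edge weights:

```python
import numpy as np

def dp_start_rows(cost):
    """For each end row j in the last column, return the start row
    (first column) of the minimum-cost 8-connected left-to-right path
    ending at j, together with the path costs."""
    n_rows, n_cols = cost.shape
    C = cost[:, 0].astype(float).copy()
    start = np.arange(n_rows)           # start row of the best path to each cell
    for i in range(1, n_cols):
        newC = np.empty(n_rows)
        news = np.empty(n_rows, dtype=int)
        for j in range(n_rows):
            lo, hi = max(j - 1, 0), min(j + 1, n_rows - 1)
            k = lo + int(np.argmin(C[lo:hi + 1]))   # best predecessor row
            newC[j] = C[k] + cost[j, i]
            news[j] = start[k]
        C, start = newC, news
    return C, start

def closed_path_rows(cost):
    """Rows j whose best left-to-right path starts and ends at row j,
    i.e. closed contours in the original Cartesian window."""
    _, start = dp_start_rows(cost)
    return [j for j in range(len(start)) if start[j] == j]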

3.3.4 Design of the Weight Function

The weight of an edge in the graph is a function of the values of the incident nodes (pixels). We start by computing the derivative in the radial direction (centred on the iris candidate position) in the original space, using a 3-point numerical differentiation, as defined in Eq. (4):

G_θ(r) = (I(r + h) − I(r − h)) / (2h)    (4)

In the graph, to each edge incident with 4-neighbouring pixels corresponds a weight determined by the derivative values of the two incident pixels, expressed as the exponential law presented in Eq. (5):

f(g) = f_ℓ + (f_h − f_ℓ) · (exp(β(255 − g)) − 1) / (exp(255β) − 1)    (5)

with f_ℓ = 2, f_h = 32, β = 0.0208, and g the minimum of the derivative computed on the two incident pixels. For 8-neighbour pixels the weight was set to √2 times that value. The parameter β was experimentally tuned using a grid search method. The remaining parameters were manually optimised in some of our previous works (Oliveira et al., 2012).
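Eq. (5) translates directly into code; the constants are the ones reported above, while the function name is our own:

```python
import math

F_LOW, F_HIGH, BETA = 2.0, 32.0, 0.0208  # f_l, f_h and beta from the paper

def edge_weight(g, diagonal=False):
    """Exponential edge weight of Eq. (5): a strong radial derivative
    (likely contour pixel) gives a cheap edge, a weak one a heavy edge.
    `g` is the minimum derivative of the two incident pixels, in [0, 255].
    Diagonal (8-neighbour) edges cost sqrt(2) times more."""
    w = F_LOW + (F_HIGH - F_LOW) * (math.exp(BETA * (255 - g)) - 1) \
        / (math.exp(BETA * 255) - 1)
    return w * math.sqrt(2) if diagonal else w
```

At the extremes, g = 255 gives the minimum weight f_ℓ = 2 and g = 0 gives the maximum weight f_h = 32, so the shortest path is pulled towards strong-gradient pixels.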

3.4 Best Pair Centre/Contour

From the previously described steps a set of cen-

tre/contour candidate pairs (Cp) is built. An example

of such candidate pairs is depicted in Figure 7, where

the yellow circles represent the centres and the purple

curves the limbic contour candidates.

Figure 7: Example of the centre/contour set of candidates.

The centre candidates are represented by yellow circles, the

detected contours by purple curves and the ground-truth iris

centre by a white cross.

The joint decision for the centre and contour is taken to maximise the joint probability of the individual parts. Here, we assume that the joint probability is a monotonous function of the product of individual measures of quality, combined in an overall quality factor, Q. The discrimination between candidates is performed by choosing the pair with the highest Q. The quality factor is given by:

Q(Cp) = μ(∆C) · ρ_p / |1 − S(C)|    (6)

where μ(∆C) is the mean derivative along the contour, ρ_p is the cross-correlation value of the centre candidate, and S(C) is the shape factor of the contour (with perimeter P and area A), given by:

S(C) = 4π · A / P²    (7)

This way the best centre/contour pair, Cp_Q, is selected based on mutual information from both the iris centre and limbic contour quality.
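Eqs. (6) and (7) can be combined into a small scoring routine; the `eps` guard against division by zero for perfectly circular contours is our own addition, not part of the paper:

```python
import math

def shape_factor(area, perimeter):
    """Isoperimetric shape factor S = 4*pi*A / P^2 of Eq. (7);
    equals 1.0 for a perfect circle, less for any other shape."""
    return 4 * math.pi * area / perimeter ** 2

def quality(mean_grad, rho, area, perimeter, eps=1e-6):
    """Quality factor Q of Eq. (6). `eps` (our addition) bounds the
    denominator when the contour is almost perfectly circular."""
    return mean_grad * rho / max(abs(1 - shape_factor(area, perimeter)), eps)
```

For equal gradient and correlation scores, a near-circular contour therefore dominates an angular one, which matches the role circularity plays in Eq. (6).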

3.5 Upper Eyelid Approximation

Eyelids represent one of the most common noise factors in images acquired under unconstrained settings when compared with images acquired in controlled environments. Even though the proposed algorithm presents no shape constraints, dark regions, such as eyelashes and shadows, generally pose difficulties to the shortest path algorithm. To cope with such difficulties a simple eyelid approximation algorithm is proposed, based, once again, on graph notions. The algorithm is very similar to the one proposed for limbic contour segmentation, with two main differences:

• The shortest left-to-right path is computed on the

original image in Cartesian coordinates;

• No cost function is designed; the cost associated with each edge is now given by the minimum intensity of the two incident pixels.

With such premises the left-to-right shortest path

algorithm, applied to the original iris image, will pref-

erentially stick to low intensity left-to-right curves.

As the eyelashes often accumulate on the upper eye-

lid, creating a low intensity region over the iris, the

shortest path will tend to traverse such a region.

The lower eyelid was not detected, as the eyelashes tend to be longer and considerably denser in the upper eyelid than in the lower one. The observed contrast between the iris and the lower eyelid is thus enough so as not to mislead the proposed algorithm.

An example of both successful and unsuccessful eye-

lid localisation is depicted in Figures 8(a) and 8(b)

respectively.

4 RESULTS

4.1 Tested Dataset

The proposed algorithm was tested on the UBIRIS.v2

iris image database (Proenc¸a et al., 2010). Images

in UBIRIS.v2 were captured under non-constrained

conditions (at-a-distance, on-the-move and on the vis-

ible wavelength), with corresponding realistic noise


Figure 8: Example of: a) successful eyelid localisation and

b) unsuccessful eyelid localisation.


Figure 9: Examples of image classes in the UBIRIS.v2 database: a) Heavily occluded; b) Heavily pigmented; c) Glasses occlusion; d) Reflection occlusion; e) Off-angle; f) Rotated eye; g) Black subjects and h) Normal.

factors. Figure 9 depicts some examples of these

noise factors (reﬂections, occlusions, pigmentation,

etc.). A subset of the original database, composed of 802 images from 36 distinct individuals, was created.

All images and individuals were randomly selected,

so as to better encompass the widest possible range

of noise factors. All images from the created subset

were manually annotated for the limbic and pupillary

contour, as well as for the geometric center of the iris

region.

4.2 Iris Centre Candidate Detection

The accuracy of the centre candidate detection step

was analysed by computing the distance between the

manually annotated iris centre and each of the N cen-

tre candidates. In the proposed work we use N = 4

as this value guaranteed that at least one candidate

lay inside the iris/pupil region, for every image in the

tested dataset. The Euclidean distance between each

centre candidate and the manually annotated ground-truth centre was computed. The iris centre detection

accuracy for a particular image corresponded to the

minimum of these distances.
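The per-image accuracy measure just described (the minimum candidate-to-ground-truth distance) amounts to the following; the function name is our own:

```python
import math

def centre_accuracy(candidates, gt):
    """Accuracy for one image: the minimum Euclidean distance between
    the annotated centre `gt` and any of the detected (y, x) candidates."""
    return min(math.hypot(y - gt[0], x - gt[1]) for y, x in candidates)
```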

A mean distance of 6.29 ± 5.71 pixels was ob-

tained for the tested dataset. Considering that the

mean iris radius of the tested dataset was 58.71 ±

17.45 pixels, this result might not seem that promising. The observed deviations of the centre candidates


from the real iris center arise mainly from two causes:

a) the partial occlusion of the iris by the eyelids results

in a deviation from an ideal circular shape and b) the

extent to which specular reﬂections contaminate the

iris region causes the gradient ﬂow to diverge towards

those regions instead of the sclera.

However, given how the limbic contour detection algorithm is designed, there is no need to achieve perfect accuracy on the real iris centre with any of the detected candidates. As long as one of the candidates lies inside the iris/pupil region, the detection of a closed contour around it (not necessarily centred on it) is guaranteed.

4.3 Best Centre/Contour Pair Discrimination

The discriminative performance of the proposed quality factor, Q(Cp), was analysed by computing the misdetection ratio, Mr. This value corresponds to the ratio between the number of images where the best centre/contour pair was not correctly discriminated and the total number of tested images. To prove that mutual context information improves results obtained by singular sources of information, the Mr values for each Q(Cp) parameter were tested individually:

• The centre/contour pair with maximum ρp value
• The centre/contour pair with maximum ∆C value
• The centre/contour pair with a S(C) value closest to 1

The Mr values for each individual parameter and for the quality factor are summarized in Table 1. As can be seen, the quality factor outperforms every singular parameter by a considerable margin, presenting an Mr value of 2.12%. Both gradient and cross-correlation based discriminations presented intermediate results, demonstrating limited discriminatory capacity when compared to the mutual context results of the quality factor. Circularity (|1 − S(C)|) presents, by far, the worst individual discriminative performance. This observation may lead to one of several conclusions: either circularity is not a good parameter to be used in the scope of mutual context information, or its effect is only observable when combined with other sources of knowledge. As no attempt was made to test combinations of two of the three suggested parameters, the true relevance of circularity, as far as discrimination is concerned, cannot be fully asserted.

Table 1: Misdetection ratios observed when the discrimination is performed with each individual parameter and with the proposed quality factor.

Parameter     Mr
ρp            0.120
∆C            0.0860
|1 − S(C)|    0.629
Q             0.0212
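The misdetection ratio Mr can be sketched as below. The scores are purely illustrative values, not taken from the paper; the sketch only shows how a per-criterion selection (here, maximum ρp versus maximum Q) is compared against the manually verified best pair:

```python
import numpy as np

# Hypothetical per-image scores for N = 4 centre/contour candidates.
# Each row is one image; rho_p stands in for the cross-correlation
# parameter and Q for the combined quality factor (values illustrative).
rho_p = np.array([[0.20, 0.80, 0.50, 0.10],
                  [0.70, 0.30, 0.60, 0.20],
                  [0.10, 0.40, 0.90, 0.30]])
Q     = np.array([[0.10, 0.90, 0.40, 0.20],
                  [0.20, 0.30, 0.80, 0.10],
                  [0.10, 0.30, 0.85, 0.40]])
truth = np.array([1, 2, 2])  # manually verified best pair per image

def misdetection_ratio(scores, truth):
    """Mr: fraction of images where the highest-scoring candidate is
    not the manually verified best centre/contour pair."""
    return float(np.mean(np.argmax(scores, axis=1) != truth))

mr_rho = misdetection_ratio(rho_p, truth)  # single-parameter selection
mr_q = misdetection_ratio(Q, truth)        # mutual-context quality factor
```

In this toy setup the single parameter misses one image out of three while the combined factor misses none, mirroring (not reproducing) the behaviour reported in Table 1.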

4.4 Limbic Contour Segmentation Errors

To evaluate the segmentation accuracy of the previously discriminated best limbic contour candidates, a series of metrics was computed. All these metrics, listed below, were computed for the initial contour and for the contour after eyelid detection, so as to assert the advantages of this last process. Table 2 summarizes the most relevant results:

• Mean, median and maximum (Hausdorff) distance, in pixels, between the detected limbic contour and the manually annotated ground-truth
• E1 and E2 errors, as presented in the NICE.I contest (http://nice1.di.ubi.pt/)
• Mean percentage of false iris (FIR) and false non-iris (FNIR) segmented pixels
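The first group of metrics can be sketched as follows. This is an illustrative one-directional variant (detected points to nearest ground-truth point); the function name and contours are hypothetical:

```python
import numpy as np

def contour_distance_stats(detected, ground_truth):
    """Mean, median and maximum (Hausdorff-style, one-directional)
    distance, in pixels, from each detected contour point to the
    nearest ground-truth contour point."""
    detected = np.asarray(detected, dtype=float)          # shape (M, 2)
    ground_truth = np.asarray(ground_truth, dtype=float)  # shape (K, 2)
    # M x K matrix of pairwise Euclidean distances.
    d = np.linalg.norm(detected[:, None, :] - ground_truth[None, :, :], axis=2)
    nearest = d.min(axis=1)  # per-point distance to the ground truth
    return nearest.mean(), np.median(nearest), nearest.max()

# Toy example: a detected contour slightly offset from its ground truth.
mean_d, median_d, hausdorff_d = contour_distance_stats(
    [(0, 1), (1, 0), (0, -1), (-1, 0)],
    [(0, 0), (1, 1), (-1, -1)])
```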

The first three measurements refer to point-to-point distances between the two referred contours. The histogram of errors and the corresponding box plots are depicted in Figures 10(a) and 10(b), respectively. The information presented in the histogram shows that, apart from the percentage of images where the quality factor failed the discrimination of the best centre/contour pair (and thus the largest distances were observed), the segmentation errors are relatively low. The effect of eyelid detection is evident in both results. The histogram of errors after eyelid detection reveals an increased concentration of errors towards lower values. This observation is also supported by the boxplot results. Lower mean and standard deviation values further corroborate the significant improvement introduced by eyelid detection. The observed influence of the upper eyelid on the segmentation results shows that its detection is a key step of the proposed algorithm. As the eyelashes often present a higher contrast with the skin than the iris with the eyelashes, it is only safe to assume that a gradient-weighted shortest path algorithm will tend to prefer the eyelash-skin boundary to the iris-eyelash boundary. Eyelid detection compensates for this fact and results in a significant improvement in all the tested metrics.


Figure 10: Distribution of the segmentation errors in the tested dataset: a) Histogram of errors (in pixels) and b) boxplots of the error distributions. All the results are presented before and after eyelid detection.

Table 2: Summary of the most relevant segmentation quality measurements before and after eyelid localisation. Mean, Median and Hausdorff are in pixels; the remaining measures lie in [0–1].

                          Mean         Median  Hausdorff  E1      E2      FIR     FNIR
Before eyelid detection   7.11 ± 5.11  4.96    19.47      0.0200  0.0923  0.1814  0.00325
After eyelid detection    4.86 ± 2.96  4.18    12.50      0.0109  0.0374  0.0690  0.00583

In 2008 Hugo Proença and Luís Alexandre, from the Universidade da Beira Interior (UBI), Portugal, promoted the NICE.I contest (http://nice1.di.ubi.pt/). This contest aimed to “evaluate the robustness to noise of iris segmentation and noise detection algorithms, toward iris recognition systems within less constrained image capturing conditions, eventually to covert ones, in the near future”. The NICE.I results represent the great majority of the already available segmentation results using the UBIRIS.v2 database. However, the evaluation parameters of the aforementioned contest are based on two principles that significantly vary from our proposed approach:

1. The segmentation of the iris region of the eye was based both on the detection of the limbic and the pupillary contours. In our work we performed no segmentation of the pupillary contour, as we argue that performing recognition regardless of this step might prove to be the path forward, as far as unconstrained iris recognition is concerned. The rationale behind such a decision is based on the fact that the contrast between the pupil and the iris is extremely dependent on a set of hardly controllable factors (illumination, iris pigmentation, obstructions, etc.), thus creating a serious challenge as far as the development of robust segmentation algorithms is concerned.

2. The final segmentation results are evaluated as the number of pixels correctly classified as iris. This description takes into consideration the detection of noisy areas (reflections or eyelashes, for example), which surpasses the scope of the proposed work.

With these two points in mind, it is obvious that a direct comparison with the NICE.I segmentation results is not possible. However, the two metrics suggested for the evaluation of iris segmentation in the contest were adapted for the evaluation of the proposed algorithm.
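The adapted metrics can be sketched from a pair of binary iris masks. This is one common reading of the NICE.I-style definitions, computed here per image over all pixels (the paper reports dataset means); the function name is hypothetical:

```python
import numpy as np

def nice_errors(segmented, ground_truth):
    """NICE.I-style segmentation errors from boolean iris masks.

    E1: fraction of pixels where the segmentation disagrees with the
    ground truth. FIR/FNIR: rates of false iris and false non-iris
    pixels; E2 averages the two, compensating for the disproportion
    between iris and non-iris area.
    """
    seg = np.asarray(segmented, dtype=bool)
    gt = np.asarray(ground_truth, dtype=bool)
    e1 = np.mean(seg ^ gt)     # pixel-wise disagreement
    fir = np.mean(seg & ~gt)   # non-iris pixels labelled iris
    fnir = np.mean(~seg & gt)  # iris pixels labelled non-iris
    e2 = 0.5 * fir + 0.5 * fnir
    return e1, e2, fir, fnir

# Toy 2x2 example: one false iris pixel out of four.
e1, e2, fir, fnir = nice_errors([[1, 1], [0, 0]],
                                [[1, 0], [0, 0]])
```

The averaging inside E2 is what makes a high FIR and a low FNIR combine into the intermediate E2 values reported in Table 2.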

The mean E1 and E2 errors for the tested dataset of images are presented in Table 2. The effect of eyelid detection was already ascertained through the analysis of the point-to-point results, but it is of relevance to note that the NICE.I metrics corroborate the previous conclusions.

The obtained E1 error is lower than all the reported errors in the NICE.I contest (summarized in Table 3). However, such a direct comparison will only be possible when noise detection and pupillary estimation are incorporated in the present algorithm. Nevertheless, these preliminary results seem to indicate some promise regarding the chosen approach.

Table 3: Comparative analysis between some recent works, including the top performing algorithms in the NICE.I contest, and the proposed methodology.

Author                     E1      E2
(Tan et al., 2010)         0.0131  –
(Sankowski et al., 2010)   0.0160  0.0600
(Almeida, 2010)            0.0180  –
(Tan and Kumar, 2012)      0.0190  –
Proposed                   0.0109  0.0374

The obtained E2 value leads to some interesting conclusions. The value presented in Table 3 is the result of a mean FIR of 0.069 and a mean FNIR of 0.0058. A higher FIR value was to be expected as, in most cases, the number of iris pixels in the UBIRIS.v2 images is considerably smaller than the number of non-iris pixels. A 0.0058 FNIR is an excellent indicator that very few iris pixels are classified as non-iris. This means that almost no useful information for recognition is lost during the segmentation process. The mean FIR value, however, indicates a still considerable number of noisy pixels that need to be pruned so as not to present misleading information to the recognition module. Figure 11 depicts some examples of segmentation results in images affected by some of the aforementioned noise factors. One can easily observe that the great majority of the pixels are correctly classified.

Figure 11: Segmentation examples on images affected by several noise factors. The red points correspond to the false iris pixels and the green ones to the false non-iris pixels.

5 CONCLUSIONS

The use of mutual context information, combining gradient orientation for centre detection and gradient magnitude for contour detection, presented promising results. Using the extracted iris regions as inputs for a feature extraction and matching module is the obvious step to carry out after the segmentation algorithm. However, some improvements to the proposed algorithm can readily be suggested:

• Improve Best Centre/Contour Pair Discrimination: the current discrimination based on the quality factor is not the most robust measurement. Training classifiers using the ρp, ∆C and S(C) values obtained in the tested dataset would generate a far more reliable discrimination module.

• Noise Detection: as previously referred, the obtained results are only promising to a certain extent. The absence of noise estimation is not acceptable for integration with a recognition module. The number of points that could produce misleading results needs to be significantly reduced in future works.

• Quality Assessment: one question that may be posed when working with images acquired under less constrained conditions is whether enough information is available so as to allow recognition. A quality assessment module to quantify the amount of textural information, occlusion and focus of individual iris images is an important prerequisite for the application of the proposed algorithm in a functional iris recognition system.

• Pupil Probability Estimation: in this work we did not address pupil segmentation because of the inherent difficulties presented by the chosen database. We argue that a recognition algorithm with no need for pupillary segmentation is probably the way forward in unconstrained acquisition settings. However, the same problem that concerns noise detection is applicable to pupil localisation: if the pixels corresponding to this region are not removed from the segmented iris mask, misleading information will be introduced in the recognition module, resulting in loss of accuracy. As accurate segmentation is rendered difficult by the intrinsic characteristics of the UBIRIS.v2 images, estimating a probability of each pixel belonging to the pupil seems a more robust way of approaching the problem. Future works will certainly focus on these points of interest.

REFERENCES

Almeida, P. (2010). A knowledge-based approach to the iris segmentation problem. Image and Vision Computing, 28(2):238–245.

Chen, R., Lin, X., and Ding, T. (2011). Iris segmentation for non-cooperative recognition systems. Image Processing, 5(5):448–456.

Chen, Y., Adjouadi, M., Han, C., Wang, J., Barreto, A., Rishe, N., and Andrian, J. (2010). A highly accurate and computationally efficient approach for unconstrained iris segmentation. Image and Vision Computing, 28(2):261–269.

Daugman, J. (1993). High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11):1148–1161.

Daugman, J. (2006). Probing the uniqueness and randomness of iriscodes: Results from 200 billion iris pair comparisons. Proceedings of the IEEE, 94(11):1927–1935.

Daugman, J. (2007). New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37(5):1167–1175.

He, Z., Tan, T., Sun, Z., and Qiu, X. (2009). Toward accurate and fast iris segmentation for iris biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(9):1670–1684.

Jain, A., Hong, L., and Pankanti, S. (2000). Biometric identification. Communications of the ACM, 43(2):90–98.

Kobatake, H. and Hashimoto, S. (1999). Convergence index filter for vector fields. IEEE Transactions on Image Processing, 8(8):1029–1038.

Ma, L., Tan, T., Wang, Y., and Zhang, D. (2004). Local intensity variation analysis for iris recognition. Pattern Recognition, 37(6):1287–1298.

Masek, L. (2003). Recognition of Human Iris Patterns for Biometric Identification. Towards Non-cooperative Biometric Iris Recognition. PhD thesis.

Nabti, M. and Bouridane, A. (2008). An effective and fast iris recognition system based on a combined multiscale feature extraction technique. Pattern Recognition, 43(3):868–879.

Oliveira, H., Cardoso, J., Magalhães, A., and Cardoso, M. (2012). Simultaneous detection of prominent points on breast cancer conservative treatment images. In Proceedings of the 19th IEEE International Conference on Image Processing, pages 2841–2844.

Pawar, M., Lokande, S., and Bapat, V. (2012). Iris segmentation using geodesic active contour for improved texture extraction in recognition. International Journal of Computer Applications, 47(16):448–456.

Proença, H., Filipe, S., Santos, R., Oliveira, J., and Alexandre, L. A. (2010). The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1529–1535.

Radman, A., Jumari, K., and Zainal, N. (2012). Iris segmentation in visible wavelength environment. Procedia Engineering, 41:743–748.

Ross, A. (2010). Iris recognition: The path forward. Computer, 43(2):30–35.

Sanchez-Avila, C., Sanchez-Reillo, R., and de Martin-Roche, D. (2002). Iris-based biometric recognition using dyadic wavelet transform. IEEE Aerospace and Electronic Systems Magazine, 17(10):3–6.

Sankowski, W., Grabowski, K., Napieralska, M., Zubert, M., and Napieralski, A. (2010). Reliable algorithm for iris segmentation in eye image. Image and Vision Computing, 28(2):231–237.

Shah, S. and Ross, A. (2009). Iris segmentation using geodesic active contours. IEEE Transactions on Information Forensics and Security, 4(4):824–836.

Tan, C. and Kumar, A. (2012). Unified framework for automated iris segmentation using distantly acquired face images. IEEE Transactions on Image Processing, 21(9):4068–4079.

Tan, T., He, Z., and Sun, Z. (2010). Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition. Image and Vision Computing, 28(2):223–230.

Vatsa, M., Singh, R., and Noore, A. (2008). Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 38(4):1021–1035.

Wildes, R. (1997). Iris recognition: an emerging biometric technology. Proceedings of the IEEE, 85(9):1348–1363.

Zuo, J. and Schmid, N. (2010). On a methodology for robust segmentation of nonideal iris images. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(3):703–718.
