the right classification in attacks and client accesses.
The paper is organized as follows: in Section 2 a brief
description of the authentication system is presented,
especially the feature point extraction and matching
stages. Section 3 deals with the analysis of several
similarity metrics applied to this system. Section 4
shows the effectiveness results obtained by the metrics
on a test image set. Finally, Section 5 provides some
discussion and conclusions.
2 AUTHENTICATION SYSTEM
PROCESS
As previously commented, the retinal vessel tree is a
good biometric trait for authentication. To obtain a
good representation of the tree, the creases of the image
are extracted. If the retinal image is viewed as a
landscape, vessels can be thought of as ridges, so the
creases image consists of the vessel skeleton (Figures 1(a)
and 1(b)).
Using the whole creases image as the biometric pattern
poses a major problem for the codification and storage
of the pattern, as the entire image would need to be
stored. To solve this, similarly to fingerprint minutiae
(ridges, endings, and bifurcations in fingerprints),
a set of landmarks is extracted from the creases image
as the biometric pattern. The most identifiable and
invariant landmarks in the retinal vessel tree are
crossover and bifurcation points and, therefore, they
are used as the biometric pattern in this work.
To detect feature points, creases are tracked to label
all of them as segments in the vessel tree, marking
their endpoints. Next, bifurcations and endpoints
are extracted by means of relationships between
segments. These relationships are found by detecting
segments close to each other and calculating their
directions. If a segment endpoint is close to another
segment and forms an angle smaller than π/2 with it, a
bifurcation or crossover is detected. Figure 1(c) shows
the result obtained after this stage.
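The detection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segment representation (endpoints, direction vector, sampled points) and the distance threshold are assumptions introduced for the example.

```python
import math

def angle_between(d1, d2):
    """Angle in radians between two direction vectors (illustrative helper)."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    norm = math.hypot(*d1) * math.hypot(*d2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def detect_feature_points(segments, dist_thr=5.0):
    """Mark a bifurcation/crossover where a segment endpoint lies close to
    another segment and the two directions form an angle below pi/2.
    Each segment is a dict with keys 'p0', 'p1', 'dir', 'points' (assumed
    representation, not from the paper)."""
    feature_points = []
    for i, seg in enumerate(segments):
        for endpoint in (seg["p0"], seg["p1"]):
            for j, other in enumerate(segments):
                if i == j:
                    continue
                # closest distance from this endpoint to the other segment
                d = min(math.dist(endpoint, q) for q in other["points"])
                if d < dist_thr and angle_between(seg["dir"], other["dir"]) < math.pi / 2:
                    feature_points.append(endpoint)
                    break
    return feature_points
```

For instance, a segment whose endpoint touches another segment at a 45° angle would be flagged, while two parallel or perpendicular segments would not.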
Figure 1: (a) original image (b) creases image (c) creases
image with the feature points extracted from it.
Once the biometric pattern for an individual, β, is
obtained as a set of points, it has to be compared with
the stored reference pattern, α, to validate the identity
of the individual. Due to eye movement during the
image acquisition stage, it is necessary to align β with
α before matching. They may also have different
cardinality. Considering the reduced range of eye
movements during acquisition, a Similarity Transform
(ST) schema is used to model pattern transformations
(N. Ryan and de Chazal, 2004). A search in the
transformation space is performed to find the most
suitable alignment parameters. Once both patterns are
aligned, a point p from α and a point p′ from β match if
distance(p, p′) < D_max, where D_max is a threshold
introduced to account for discontinuities during the
creases extraction process that lead to the mislocation
of feature points. In this way, the number of matched
points between the patterns is calculated. Next,
similarity metrics are established to obtain a final
criterion of comparison between patterns.
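The alignment and matching step can be sketched as below. This is an illustrative simplification: the transform parameters are assumed to have already been found by the search, and the greedy one-to-one nearest-neighbour pairing is a stand-in for the paper's matching procedure.

```python
import math

def similarity_transform(p, s, theta, tx, ty):
    """Apply a similarity transform (scale s, rotation theta, translation tx, ty)."""
    x, y = p
    return (s * (x * math.cos(theta) - y * math.sin(theta)) + tx,
            s * (x * math.sin(theta) + y * math.cos(theta)) + ty)

def count_matched_points(alpha, beta, params, d_max):
    """Count one-to-one matches: p in alpha matches p' in the aligned beta
    when distance(p, p') < d_max. Greedy pairing is an assumption made
    for this sketch, not the paper's exact scheme."""
    aligned = [similarity_transform(p, *params) for p in beta]
    used = set()
    matched = 0
    for p in alpha:
        best_j, best_d = None, d_max
        for j, q in enumerate(aligned):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matched += 1
    return matched
```

With the correct transform parameters, a pattern matched against a translated copy of itself recovers every point; the matched count then feeds the similarity metrics of Section 3.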
3 SIMILARITY METRICS
ANALYSIS
The main goal is to define similarity measures on the
aligned patterns to correctly classify authentications
into two classes: attacks (unauthorized accesses), when
the two matched patterns are from different individuals,
and clients (authorized accesses), when both patterns
belong to the same person.
For the metric analysis, a set of 150 images (100
images, 2 images per individual, plus 50 additional
images) from the VARIA database (VARIA) was selected.
These images have a high variability in contrast and
illumination, allowing the system to be tested under
quite hard conditions. To build the training set of
matchings, all images are matched against all the
images (a total of 150×150 matchings). The matchings
are classified as attacks or client accesses depending
on whether or not the images belong to the same
individual. The separation of both classes by a given
metric determines its classification capabilities.
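The construction of this labeled training set can be sketched as follows. The `match_score` function and the image-to-identity mapping are hypothetical stand-ins for the matching stage of Section 2 and the database annotations; they are not part of the paper.

```python
def build_training_set(images, identity_of, match_score):
    """All-vs-all matching: label each pair 'client' when both images come
    from the same individual, 'attack' otherwise. `identity_of` maps an
    image to its individual; `match_score` is the matching stage output
    (both assumed for this sketch)."""
    samples = []
    for a in images:
        for b in images:
            label = "client" if identity_of[a] == identity_of[b] else "attack"
            samples.append((match_score(a, b), label))
    return samples
```

For 150 images this yields the 150×150 scored pairs on which each candidate metric's class separation is evaluated.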
The main information used to measure similarity
between two patterns is the number of feature points
successfully matched between them. Figure 2 shows the
histogram of matched points for both classes of
authentications in the training set. As can be observed,
BIOSIGNALS 2009 - International Conference on Bio-inspired Systems and Signal Processing