Table 1: Comparison of registration algorithms: Algorithm 1 (Lajevardi et al., 2013) and Algorithm 2, proposed in this paper.

                        Algorithm 1   Algorithm 2
    Best ε                   6             6
    Average time (sec)     15.99         25.94
    EER (%)                  2             0
5 EXPERIMENT
The PUT database contains 600 palm vein images each
for the left and right hands of 50 individuals, giving
6600 genuine comparisons across all sessions, 1800
genuine comparisons within the same session and
352800 imposter comparisons. To tune the parameters
in the experiment, we chose a random sample
of 50 genuine comparisons and 50 imposter compar-
isons from all possible genuine and imposter compar-
isons. This set of 100 comparisons is called the train-
ing set and will be consistently used throughout all
the experiments described in this paper. The BGM
algorithm has two parameters to be tuned: the tolerance
ε in the registration algorithm, and the insertion
and deletion cost α in the graph matching algorithm.
The first experiment was to compare the registration
algorithm by Lajevardi et al (Lajevardi et al., 2013)
with the improved algorithm, Algorithm 2, presented
in this paper. The graph pairs in the training set were
aligned first using Algorithm 1 and then Algorithm 2,
and in each case the number of vertex pairs lying
within a tolerance ε of each other was counted.
A distance measure based on the similarity score S_n
(Section 4, Score 1), given by d_min = 1 − S_n, is
computed from the number of common vertex pairs and
the sizes of the two graphs compared. The d_min
values between genuine and imposter comparisons in the
training set are used to define score distributions to
compare the alignment performance of the two algorithms.
rithms. This experiment is run over range of ε values
to find the ε value that gave the lowest EER for each
algorithm. The Equal Error Rates (EER) at the best ε
value for each algorithm and the average registration
times are presented in Table 1.
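For concreteness, the EER selection over the genuine and imposter d_min distributions can be sketched as follows. This is a minimal illustration, not the paper's implementation; the score arrays passed in are hypothetical placeholders, not values from the PUT experiments.

```python
import numpy as np

def eer(genuine, imposter):
    """Equal Error Rate from genuine and imposter d_min distances.

    Smaller d_min means a better match, so a comparison is accepted
    when its distance falls at or below the decision threshold."""
    genuine = np.asarray(genuine, dtype=float)
    imposter = np.asarray(imposter, dtype=float)
    candidates = []
    for t in np.sort(np.concatenate([genuine, imposter])):
        fnmr = np.mean(genuine > t)    # genuine pairs wrongly rejected
        fmr = np.mean(imposter <= t)   # imposter pairs wrongly accepted
        candidates.append((abs(fmr - fnmr), (fmr + fnmr) / 2.0))
    # The EER is reported at the threshold where FMR and FNMR are closest.
    return min(candidates)[1]
```

Running this sweep once per candidate ε value, and keeping the ε whose score distributions give the lowest EER, reproduces the tuning procedure described above.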
Setting ε = 6 and using Algorithm 2 for reg-
istration, the BGM algorithm was run on all the
comparisons from the palm vein database, excluding
the comparisons used in the training set. BGM was
run using a range of α values to determine the pa-
rameters that best separated the genuine and imposter
scores. It was found that α = 11 best compensated
for the variations within samples of the same hand
in determining the graph edit path when comparing
pairs of graphs.
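The α sweep can be sketched in the same spirit. Here `bgm_dmin` is a hypothetical stand-in for running the BGM matcher at a given insertion/deletion cost (the real matcher is outside this snippet), and "best separation" is illustrated with a simple mean-gap criterion rather than the paper's EER-based choice.

```python
import numpy as np

def sweep_alpha(alphas, bgm_dmin, genuine_pairs, imposter_pairs):
    """Pick the insertion/deletion cost alpha that best separates
    genuine from imposter d_min scores; separation is measured here
    as the gap between mean imposter and mean genuine distance."""
    best_alpha, best_gap = None, -np.inf
    for a in alphas:
        gen = [bgm_dmin(p, q, a) for p, q in genuine_pairs]
        imp = [bgm_dmin(p, q, a) for p, q in imposter_pairs]
        gap = np.mean(imp) - np.mean(gen)
        if gap > best_gap:
            best_alpha, best_gap = a, gap
    return best_alpha
```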
The BGM algorithm was run on the left and right
hands separately. There were two distinct types
of experiments based on the type of genuine comparisons
made. The first, Across Session Genuines, used
all the genuine comparisons, including those across
sessions; the second, Within Session Genuines, used
only genuine comparisons from within
the same session. In both types of experiments,
the distance between a pair of graphs was measured
using each of the 10 topological measures listed in
Section 4. Table 2 shows the EER based on the
10 topological features for the across session and
within session experiments for the left and right hands.
Of these features, S_e does the best job of separating
the genuine comparisons from the imposters.
The next step was to determine if any combina-
tion of similarity scores could improve the perfor-
mance compared to using a single similarity measure.
To do this, first the pairwise Spearman’s correlation
coefficient between the 10 features was calculated.
We found that most of the topological features were
strongly correlated, with correlation coefficients
between 0.8 and 0.95. In fact, only S_ρc1 showed
moderate correlations with the other features, ranging
between 0.58 and 0.78. Nevertheless, in the absence of
perfect correlation between features, there is potential
that a pairwise combination of S_e with one of the
other features could produce better matching results.
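The rank-correlation step can be sketched in pure NumPy as follows. This assumes no tied scores (argsort-of-argsort ranking does not average ranks over ties); the paper presumably used a standard statistics routine.

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks of x and y. Valid when there are no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each x value
    ry = np.argsort(np.argsort(y)).astype(float)  # rank of each y value
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Applying this to every pair of the 10 feature score vectors yields the 10 × 10 correlation matrix used to decide which features are redundant.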
To test this hypothesis, S_e was combined with every
other feature to give 9 different pairings of topolog-
ical features. For every pair of features, the follow-
ing experiment was done. A Support Vector Machine
(SVM) was used to build a classifier with a radial ba-
sis function (RBF) kernel that was tuned on the score
pairs from the training set to determine the best
parameters for the RBF kernel. The remaining
comparisons in the database were divided into 10 parts
and a 10-fold test was conducted in which the SVM
classifier was trained on 9 parts of the data and
tested on the remaining part. The false match rate
(FMR), false non-match rate (FNMR) and total misclassification error
(TE) were computed in every fold. The average over
10 folds was taken as the matching performance us-
ing the chosen pair of features. The results for the 9
pairings are shown in Table 4.
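The fold loop can be sketched as follows, using scikit-learn's SVC as a stand-in for the paper's classifier. The 2-D score pairs here are synthetic placeholders (well-separated Gaussian clusters), not PUT data, and the kernel parameters are illustrative rather than the tuned values.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical 2-D score pairs (S_e paired with one other feature):
# genuine comparisons cluster at low distances, imposters at high ones.
genuine = rng.normal(0.2, 0.05, size=(100, 2))
imposter = rng.normal(0.7, 0.05, size=(300, 2))
X = np.vstack([genuine, imposter])
y = np.concatenate([np.ones(100), np.zeros(300)])

fmr, fnmr, te = [], [], []
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in folds.split(X, y):
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    truth = y[test_idx]
    imp, gen = truth == 0, truth == 1
    fmr.append(np.mean(pred[imp] == 1))   # imposters wrongly accepted
    fnmr.append(np.mean(pred[gen] == 0))  # genuines wrongly rejected
    te.append(np.mean(pred != truth))     # total misclassification error

# Average over the 10 folds, as in the experiment described above.
mean_fmr, mean_fnmr, mean_te = map(np.mean, (fmr, fnmr, te))
```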
6 RESULTS AND DISCUSSION
Table 1 shows that Algorithm 2 significantly improves
the registration process over Algorithm 1 by Lajevardi
et al (Lajevardi et al., 2013), evidenced by the 0%
ICISSP 2015 - 1st International Conference on Information Systems Security and Privacy