features and DL. In either case the algorithm works to
reject it.
Even if the forger writes the signature within a
given area with the expected length and time, it is
very difficult for the forger to reproduce a tolerable
MS. Even the genuine user cannot write the signature
with exactly the same MS as before, but s/he can write
his/her signature within the tolerable interval of MS,
whereas the forger cannot, and our algorithm easily
recognizes the deviating speed and rejects the signature.
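The MS check above can be sketched as follows. This is an illustrative sketch, not the paper's exact procedure: the function names, the (x, y, t) point format, and the `tolerance` parameter are assumptions.

```python
# Hypothetical sketch: accept a signature only if its mean speed (MS)
# lies within a tolerance interval learned from genuine samples.
def mean_speed(points):
    """points: list of (x, y, t) samples; returns average pen speed."""
    total_dist, total_time = 0.0, 0.0
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        total_dist += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        total_time += t1 - t0
    return total_dist / total_time if total_time > 0 else 0.0

def ms_within_tolerance(claimed_ms, genuine_ms_values, tolerance=0.25):
    """Accept if claimed MS lies within +/- tolerance of the genuine mean."""
    ref = sum(genuine_ms_values) / len(genuine_ms_values)
    return abs(claimed_ms - ref) <= tolerance * ref
```

A genuine user whose MS drifts slightly between sessions still falls inside the interval, while a forger tracing the shape slowly does not.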
CXY measures the scatter of all points of a signature
as they are distributed on the device screen.
So, even if the forger writes a signature matching
PL, TL and MS, it is unlikely to match the point
distribution of the genuine signatures. Whenever
his/her signature does not match CXY, our algorithm
detects that it is a forgery.
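One simple way to quantify such point scatter is the standard deviation of the coordinates. The paper's exact CXY definition is not reproduced here, so the sketch below is an illustrative stand-in:

```python
# Illustrative scatter measure (an assumption, not the paper's exact CXY):
# the standard deviation of the x and y coordinates of all signature points.
def cxy_scatter(points):
    """points: list of (x, y); returns (std_x, std_y) as a scatter measure."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    var_x = sum((x - mx) ** 2 for x, _ in points) / n
    var_y = sum((y - my) ** 2 for _, y in points) / n
    return var_x ** 0.5, var_y ** 0.5
```

Two signatures of the same length and speed can still differ sharply in this measure if their points cluster in different regions of the screen.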
A signature may have multiple strokes, and each
stroke has its own features (PL, TL, DL, MS, CXY and
STVLR) because it is just a subsequence of the sig-
nature sequence. The features differ from stroke to
stroke. Our algorithm therefore calculates all the
features of each stroke and then finds their ratio to
the whole signature. So, even if the forger is able
to write a signature which successfully passes all
the global feature tests, the signature is still rejected
if it does not pass the stroke-ratio verification
process.
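The stroke-ratio idea can be sketched as below. This is a minimal illustration under stated assumptions: the function names and the `tolerance` parameter are hypothetical, and any one per-stroke feature (e.g. per-stroke path length) stands in for the full feature set.

```python
# Hypothetical sketch of the stroke-ratio check: each stroke's feature value
# is expressed as a fraction of the whole-signature value, and the claimed
# signature's ratios must stay close to the enrolled ones.
def stroke_ratios(stroke_values):
    """stroke_values: one feature per stroke (e.g. per-stroke path length).
    Returns each stroke's share of the whole-signature total."""
    total = sum(stroke_values)
    return [v / total for v in stroke_values]

def ratios_match(claimed, enrolled, tolerance=0.1):
    """Accept only if every per-stroke ratio deviates by at most `tolerance`."""
    if len(claimed) != len(enrolled):
        return False  # different stroke count: reject outright
    return all(abs(c - e) <= tolerance for c, e in zip(claimed, enrolled))
```

Because the ratios are relative, a forger cannot pass this test merely by matching the global totals; the internal proportions of the strokes must match as well.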
Finally, if the forgery passes all the global and
stroke feature tests, then the signature undergoes
DTW testing. DTW compares the similarity between
two sequences. We compute two distance vectors:
d_g represents the "distance" among the three gen-
uine signatures, while d_v represents the "distance"
between the three genuine signatures and the claimed
signature. If d_v does not lie in the interval calcu-
lated from d_g by a certain threshold at the template-
generation phase, then the signature is rejected as a
forgery.
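The DTW comparison and the interval test can be sketched as follows. The `dtw_distance` function is the textbook dynamic-programming DTW on 1-D sequences; the `threshold` factor in `accept` is an assumed stand-in for the paper's template-generation threshold, not its actual value.

```python
# Minimal DTW sketch: dtw_distance compares two feature sequences; the
# decision rule mirrors the text, bounding the claimed-signature distance
# d_v by an assumed threshold factor applied to the genuine-only distance d_g.
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping on 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def accept(d_v, d_g, threshold=1.5):
    """Accept the claimed signature if d_v is within threshold * d_g."""
    return d_v <= threshold * d_g
```

Note that DTW tolerates local time warping: a genuine signature written slightly faster or slower still yields a small distance, while a traced forgery with a different dynamic profile does not.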
4 EXPERIMENTATION
In this section we present experimental results con-
cerning identity verification with our system. The ac-
curacy of a recognition algorithm is generally mea-
sured in terms of two potential types of errors: false
negatives (fn) and false positives (fp). fp are cases
where a claimed identity is accepted when it should
not be, while fn are cases where a claimed identity
is rejected when it should be accepted. The frequency
at which false acceptance errors occur is denoted as
the False Acceptance Rate (FAR), while the frequency
at which false rejection errors occur is denoted as the
False Rejection Rate (FRR). Two metrics building on
true/false positives/negatives (tp, fp, tn, fn) are widely
adopted: precision and recall. Recall (tp/(tp + fn)) is
the probability that a valid identity is accepted by the
system (i.e., the true positive rate), while precision
(tp/(tp + fp)) is the probability that a claimed identity
which is accepted by the system is valid. The F-measure,
the harmonic mean of precision and recall, combines
both metrics into a global measure: f-measure =
(2 × precision × recall)/(precision + recall).
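The three metrics above, written out directly (the counts in the test are illustrative only):

```python
# Precision, recall and F-measure from true/false positive/negative counts,
# exactly as defined in the text.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_measure(prec, rec):
    """Harmonic mean of precision and recall."""
    return 2 * prec * rec / (prec + rec)
```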
A threshold on the similarity score must be identi-
fied to determine whether two signatures are sim-
ilar (accept the identity) or significantly different (re-
ject the identity). The higher the threshold, the higher
the precision (i.e., the lower the risk of accepting in-
valid identities). However, a high threshold also de-
creases the recall of the system (i.e., the higher the
risk of rejecting valid identities).
The performance of the proposed scheme has
been assessed in terms of false positives, false neg-
atives, precision, recall and f-measure on three differ-
ent datasets: the SigComp2011 Dutch and Chinese
datasets (Liwicki et al., 2011) and the SigComp2013
Japanese dataset (Malik et al., 2013).
We start by describing the experimental set-up.
Several mobile devices have been involved in our ex-
periments (i.e., Google Nexus 5, GalaxyS2, XperiaZ2
and ZTE Blade A430), along with several standard
datasets. The specification of the datasets involved
are as follows:
• The SigComp2011 (Liwicki et al., 2011) compe-
tition involved (online) Dutch and Chinese data.
The purpose of using these two datasets was to
evaluate the validity of the participating systems
on both Western and Chinese signatures. Signa-
ture data were acquired using a WACOM Intuos3
A3 Wide USB Pen Tablet and the MovAlyzer col-
lection software.
– Dutch Dataset. The dataset is divided into
two non-overlapping parts: a training set (com-
prising 10 authors with 330 genuine signa-
tures and 119 forgeries) and a test set (com-
prising 10 authors with 648 genuine signa-
tures and 611 corresponding forgeries).
– Chinese Dataset. The dataset is divided into
two non-overlapping parts: a training set (com-
prising 10 authors with 230 genuine signa-
tures and 430 forgeries) and a test set (com-
prising 10 authors with 120 genuine signa-
tures and 461 corresponding forgeries).
• The SigComp2013 (Malik et al., 2013) compe-
tition involved (online) data collected by re-
searchers at the Human Interface Laboratory, Mie
University, Japan.
ICISSP 2016 - 2nd International Conference on Information Systems Security and Privacy