Table 2: Results of feature selection and normalization for the ISL database.

Feature Vector          SVM    MLP    NB     BN     k-NN   LMT    J48
ZM (121)                89.4   89.5   76.5   74.7   81.1   87.7   79.0
Combined (135)          92.7   92.7   80.7   82.8   84.0   91.2   79.3
CFS (56)                93.4   93.0   89.0   87.3   86.8   92.1   80.0
Info Gain + CFS (56)    96.3   96.7   97.8   96.0   89.7   96.3   86.3
performs better than MLP and LMT. For the other classifiers, the accuracy remains below 85%. To reduce the feature vector size, the standard Correlation-based Feature Selection (CFS) technique is applied to the combined set of 135 features. The resulting reduced feature vector of size 56 is listed in Table 3. Among the 56 CFS-selected features, there are 45 ZMs, 4 Hu Moments, and all of the Geometric features.
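CFS scores candidate feature subsets by Hall's merit heuristic, which rewards features that correlate with the class while penalizing redundancy among the features themselves. The sketch below is a minimal illustration of that merit score only (standard CFS additionally runs a best-first search over subsets, and typically measures correlation via symmetrical uncertainty after discretization rather than the Pearson correlation used here for simplicity); the toy data is purely illustrative.

```python
import math

def pearson(x, y):
    # Plain Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cfs_merit(features, target):
    # Hall's CFS merit: k * r_cf / sqrt(k + k*(k-1) * r_ff),
    # where r_cf is the mean feature-class correlation and
    # r_ff the mean feature-feature correlation of the subset.
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = (sum(abs(pearson(features[i], features[j]))
                for i, j in pairs) / len(pairs)) if pairs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Toy data: f1 tracks the class, f2 is a near-duplicate of f1, f3 is noise.
target = [0, 0, 1, 1, 0, 1, 1, 0]
f1 = [0.10, 0.20, 0.90, 0.80, 0.15, 0.85, 0.95, 0.05]
f2 = [0.12, 0.18, 0.88, 0.82, 0.20, 0.90, 0.93, 0.10]
f3 = [0.50, 0.10, 0.40, 0.60, 0.30, 0.20, 0.70, 0.45]

print(cfs_merit([f1], target))      # one relevant feature: high merit
print(cfs_merit([f1, f2], target))  # a redundant near-copy adds almost nothing
print(cfs_merit([f1, f3], target))  # an irrelevant feature lowers the merit
```

This is why CFS can discard most of the 135 combined features while keeping accuracy: redundant and irrelevant features contribute little or negative merit.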
Focusing on the ZM order, it is observed that 71% of the selected features are lower-order ZMs. These results are therefore in line with the generalized conclusions drawn while analyzing the behavior of the Zernike polynomial plots in Section 2.3.2. It is worth noting that, for the selected feature set, the performance of NB and BN improves significantly; minor improvement is also observed for all other classifiers.
Since the feature values vary over widely different ranges, normalization is applied to bring every feature within 0 and 1. For the normalized feature vector, the highest rise in accuracy is observed for NB, BN, and J48; some improvement is also observed for SVM, MLP, k-NN, and LMT.
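The paper does not spell out the normalization formula; a minimal sketch, assuming standard per-feature min-max scaling to [0, 1], is:

```python
def min_max_normalize(columns):
    """Scale each feature column to [0, 1] via (x - min) / (max - min)."""
    normalized = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = hi - lo
        # A constant feature carries no information; map it to all zeros.
        normalized.append([(x - lo) / span if span else 0.0 for x in col])
    return normalized

# Two toy features with very different ranges end up on the same scale.
features = [[2.0, 8.0, 5.0], [100.0, 300.0, 200.0]]
print(min_max_normalize(features))  # → [[0.0, 1.0, 0.5], [0.0, 1.0, 0.5]]
```

This matches the observed pattern in Table 2: classifiers without any inherent scale handling (NB, BN, J48) benefit most once all features share a common range.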
3.2 Triesch’s Dataset
The proposed system's performance is also studied on the standard Triesch's dataset, which contains 12 signs performed by 20 signers against different backgrounds. The results are shown in Table 4. Three subsets of Triesch's dataset are formed: Set 1 includes images of all signs in both uniform and complex backgrounds, Set 2 includes images of all signs in uniform backgrounds only, and Set 3 includes images of only 6 distinctive signs in uniform backgrounds. SVM gives the highest accuracy in all three cases. For a uniform background and 6 distinctive signs, the accuracy is 93.2%; it drops to 82.5% when all 12 signs are included, which may be attributed to the large similarity among single-handed signs. Accuracy drops further to 61.5% when images with complex backgrounds are also included.
4 CONCLUSION
For the ISL database, the accuracies of the combined feature set, the CFS-based feature vector, and the normalized feature vector are compared in Table 2. For the combined feature vector, MLP and SVM give the highest accuracy, and for the selected feature set the performance of NB and BN improves significantly. The following specific conclusions are drawn:
• Among the individual feature sets, ZMs perform better than HMs and GFs. However, combining GFs and HMs with ZMs enhances the overall performance.
• For higher orders of ZM, the feature vector size increases considerably without a significant improvement in accuracy; for Naive Bayes and Bayes Net, accuracy even decreases owing to the much larger feature vector. Therefore, a reduced feature set is obtained using Correlation-based Feature Selection. In the reduced feature vector, 71% of the features are lower-order ZMs, and only moments with iteration v <= 10 are selected.
• For the combined feature vector, SVM, MLP, and the Logistic Model Tree (LMT) show similar results and outperform the other classifiers. It can therefore be concluded that LMT, MLP, and SVM are capable of handling the larger feature vector.
• For the CFS-based reduced feature set, the performance of NB and BN improves significantly, while minor improvement is observed for the other classifiers; SVM performs best on this set. Applying InfoGain-based feature weighting enhances the performance further. The major improvement is observed for classifiers such as NB, BN, k-NN, and J48, as these classifiers have no inherent feature normalization process. For the normalized feature vector, NB outperforms SVM, giving the highest accuracy of 97.8%.
• The proposed feature vector contains some higher-order ZMs in addition to the lower-order ZMs, with only moments of iteration v <= 10 selected. Therefore, going up to higher orders while restricting the iteration value to lower values has resulted in an optimal feature vector.
• For Triesch's dataset, SVM gives good accuracy for the uniform background only.
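The InfoGain weighting referred to above ranks each feature by the information gain IG = H(class) - H(class | feature), i.e. the reduction in class entropy once the feature value is known. A minimal sketch for a discrete feature follows (continuous moment values would first need to be discretized; the toy data is illustrative only):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy, in bits, of a sequence of class labels.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """IG(class; feature) = H(class) - H(class | feature) for a discrete feature."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

labels = ['a', 'a', 'b', 'b']
informative = ['lo', 'lo', 'hi', 'hi']   # perfectly predicts the class
uninformative = ['x', 'y', 'x', 'y']     # independent of the class
print(info_gain(informative, labels))    # → 1.0 bit
print(info_gain(uninformative, labels))  # → 0.0 bits
```

Weighting features by this score before classification explains the gains seen for NB, BN, k-NN, and J48 in Table 2: uninformative features are effectively suppressed.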
The optimal shape-based feature set proposed in this
paper shall further be integrated into a dynamic ISL
recognition system. The proposed feature vector can
be utilized directly for representing the hand gestures
ICPRAM 2017 - 6th International Conference on Pattern Recognition Applications and Methods