the vertical projections, plus the number of holes:
33 characteristics.
c. In this data set each character is represented by the
responses of the four main Kirsch compass filters: hor-
izontal, vertical, and both diagonals. Each result is
down-sampled to a 4 × 4 image, and the original im-
age is down-sampled as well. This gives 5 × 16 =
80 characteristics, plus the additional one for the
number of holes: 81 characteristics.
d. We use an overlapped down-sampling of the original
image. Each 4 × 4 block of pixels gives one value;
the mask is then moved two pixels to the right until
it reaches the right side, after which it is lowered two
pixels and the process is repeated from the left. We
obtain 49 attributes, plus the number of holes: 50
characteristics.
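The last two representations (c and d) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a 16 × 16 binary character matrix, mean pooling as the per-block reduction, and one standard choice of 3 × 3 Kirsch compass masks; none of these details are stated in the text.

```python
def convolve3x3(image, kernel):
    """3x3 convolution with zero padding, keeping the input size
    (an assumption; the paper does not detail border handling)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        acc += kernel[dr + 1][dc + 1] * image[rr][cc]
            out[r][c] = acc
    return out

def block_downsample(image, out_side=4):
    """Down-sample to out_side x out_side by averaging equal blocks."""
    h, w = len(image), len(image[0])
    bh, bw = h // out_side, w // out_side
    return [sum(image[r][c]
                for r in range(br * bh, (br + 1) * bh)
                for c in range(bc * bw, (bc + 1) * bw)) / (bh * bw)
            for br in range(out_side) for bc in range(out_side)]

def overlapped_downsample(image, block=4, step=2):
    """Representation d: slide a block x block mask with the given
    step, one mean value per position; 16x16 input -> 7x7 = 49."""
    h, w = len(image), len(image[0])
    return [sum(image[r][c]
                for r in range(top, top + block)
                for c in range(left, left + block)) / (block * block)
            for top in range(0, h - block + 1, step)
            for left in range(0, w - block + 1, step)]

# Standard 3x3 Kirsch compass masks for the four main directions.
KIRSCH = [
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],  # horizontal
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],  # vertical
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],  # diagonal 1
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],  # diagonal 2
]

char = [[0] * 16 for _ in range(16)]           # 16x16 binary character
feats_c = [v for k in KIRSCH
           for v in block_downsample(convolve3x3(char, k))]
feats_c += block_downsample(char)              # plus the original image
assert len(feats_c) == 80                      # + number of holes -> 81

feats_d = overlapped_downsample(char)
assert len(feats_d) == 49                      # + number of holes -> 50
```

With a 16 × 16 matrix the arithmetic matches the text: four filtered images and the original, each reduced to 4 × 4, give 5 × 16 = 80 values, and the overlapped 4 × 4 mask with step 2 visits a 7 × 7 grid of positions, giving 49 values.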
Regarding the classifiers, we tried the following:
1. functions.MultilayerPerceptron, with parameters:
GUI=false, autoBuild=true, Debug=false, De-
cay=false, HiddenLayers=(number of attributes +
number of classes)/2, LearningRate=0.3, momen-
tum=0.2, nominalToBinaryFilter=true, normal-
izeAttributes=true, normalizeNumericClass=true,
RandomSeed=0, trainingTime=500, validation-
SetSize=0 and validationThreshold=20. Neural
networks have been used as classifiers in
(Draghici, 1997; Nijhuis et al., 1995).
2. bayes.NaiveBayesUpdateable, with options: de-
bug=false, useKernelEstimator=false and useSu-
pervisedDiscretization=false.
3. functions.SMO, Platt’s sequential minimal opti-
mization algorithm for training Support Vector
Machines (Platt, 1999; Zheng and He, 2006;
Chengwen et al., 2006), with options: buildLogis-
ticModels=false, C=1.0, checksTurnedOff=false,
Epsilon=1.0E-12, filterType=Normalize training
data, Kernel=PolyKernel, RandomSeed=1 and
toleranceParameter=0.0010.
4. lazy.IBk, with no distance weighting and op-
tions: KNN=1, crossValidate=false, Debug=false,
MeanSquared=false, NearestNeighbourSearchAl-
gorithm=LinearNN, WindowSize=0.
5. meta.AdaBoostM1, with options: classifier=J48,
Debug=false, NumIterations=10, useResam-
pling=false, WeightThreshold=100.
6. meta.Bagging, with options: classifier=J48,
Debug=false, NumIterations=10, bagSizePer-
cent=100, calcOutOfBag=false, Seed=1.
7. trees.J48, with options: binarySplits=false, Confi-
denceFactor=0.25, debug=false, MinNumObj=2,
numFolds=3, reducedErrorPruning=false,
SaveInstanceData=false, Seed=1, subtreeRais-
ing=true, Unpruned=false, UseLaplace=false.

Table 2: Results comparison

        a.      b.      c.      d.
1     99.10   97.88   98.15   99.03
2     96.42   89.44   89.86   93.65
3     99.19   96.85   98.01   99.09
4     97.86   96.38   96.45   97.74
5     97.56   95.39   96.23   96.81
6     95.46   92.92   93.87   95.94
7     94.04   89.17   91.04   93.66
The results of the experiments are shown in Ta-
ble 2. The underlined values indicate a significantly
worse accuracy compared with the multilayer percep-
tron, which has been used as the baseline of the compari-
son. We can see that all the classifiers reach an accuracy
above 90% for all the data sets, which shows the
benefits of the preprocessing and normalization work.
Note, though, that the best results are obtained when
the character is used without any characteristic extrac-
tion (a). The worst set of characteristics is the hori-
zontal and vertical projections (b).
Regarding the classifiers, as expected, AdaBoost
and Bagging improve the performance of the J48 they
use as base learner, AdaBoost more so than Bagging.
The best classifier is the SMO applied directly to the
binary matrix that represents the character. However,
we finally decided to use the multilayer perceptron: it
is only slightly less accurate, but it is faster and its
memory requirements are more than three times lower.
5 CONCLUSIONS
We have described an artificial vision system used to
recognize Spanish car license plate numbers in
raster images. We combine the use of classic image pro-
cessing techniques with some new ideas, such as the
softening filter and the character segmentation method.
In the study of classifiers we confirmed the ben-
efits of the preprocessing stages, achieving accuracies
above 90% for all the sets of characteristics. For
two of the classifiers the accuracy rises above 99%.
In future versions, we expect to take into account
the two-row Spanish license plates, and the special
license plates with different combinations of back-
ground and foreground colors.
In addition, as one of the reviewers suggested, we
want to prepare a more detailed study of the use of
ICINCO 2008 - International Conference on Informatics in Control, Automation and Robotics