
Table 2: Hu moment invariants for the characters in Figure 3.

Char   phi1      phi2      phi3      phi4       phi5       phi6       phi7
1      6.7E-004  3.7E-010  5.6E-015  2.1E-014   1.3E-028   2.5E-019  -1.8E-028
2      6.8E-004  4.9E-010  2.0E-015  7.8E-015   6.0E-030   1.0E-019   3.0E-029
2      6.9E-004  2.2E-010  2.3E-014  1.4E-014   2.4E-028   1.1E-019  -1.1E-028
4      6.8E-004  7.4E-011  8.9E-016  8.2E-015   4.8E-030  -5.7E-020   2.2E-029
4      7.0E-004  3.3E-010  7.8E-015  1.4E-014  -1.4E-028  -1.4E-019  -1.7E-029
8      6.9E-004  2.0E-010  2.2E-015  4.4E-015   6.7E-030   2.9E-020  -1.2E-029
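As an illustration of how values like those in Table 2 arise, the first two Hu invariants can be computed from the normalized central moments of a binary image. This is a minimal sketch (the paper's feature uses all seven invariants); the function name and the test image are illustrative, not from the paper.

```python
import numpy as np

def hu_invariants_12(img):
    """Compute the first two Hu moment invariants of a binary image.
    A minimal sketch; the full feature set uses all seven invariants."""
    M, N = img.shape
    y, x = np.mgrid[0:M, 0:N]          # row and column index grids
    m00 = img.sum()                    # zeroth-order geometric moment
    xc = (x * img).sum() / m00         # centroid (x_bar, y_bar)
    yc = (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment eta_pq = mu_pq / m00^((p+q)/2 + 1)
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Because the invariants are built from normalized central moments, they are unchanged (up to rounding) when the character is rotated by 90 degrees, which is easy to verify on a small test pattern.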
$$x_l = c + \frac{l(d - c)}{N - 1}, \qquad y_k = d - \frac{k(d - c)}{M - 1} \tag{19}$$
for $k = 0, \ldots, M - 1$ and $l = 0, \ldots, N - 1$, where $c$ and $d$ are real numbers taking the values shown in Figure 4.
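The sampling grid of Eq. (19) can be sketched directly; the function name is illustrative:

```python
import numpy as np

def sample_grid(M, N, c, d):
    """Map pixel indices of an M x N image to coordinates following
    Eq. (19): x_l = c + l(d - c)/(N - 1), y_k = d - k(d - c)/(M - 1),
    so that row k = 0 corresponds to the top of the image (y = d)."""
    l = np.arange(N)
    k = np.arange(M)
    x = c + l * (d - c) / (N - 1)
    y = d - k * (d - c) / (M - 1)
    return x, y
```

With c = -1 and d = 1 (Figure 4a), the first and last columns map to x = -1 and x = 1, and the rows run from y = 1 down to y = -1.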
To calculate the Zernike moments of an image
$f(x, y)$, the image is first mapped to the unit disk
using polar coordinates, with the centre of the
image at the origin of the unit disk. Pixels
falling outside the unit disk are not used in the
calculation.
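The mapping to polar coordinates on the unit disk, and the exclusion of pixels outside it, can be sketched as follows (a hypothetical helper, assuming the c = -1, d = 1 mapping of Figure 4a):

```python
import numpy as np

def polar_unit_disk(M, N, c=-1.0, d=1.0):
    """Map an M x N pixel grid onto the unit disk: return polar
    coordinates (rho, theta) per pixel and a boolean mask selecting
    the pixels inside the disk; pixels with rho > 1 are excluded
    from the Zernike moment computation."""
    x = c + np.arange(N) * (d - c) / (N - 1)
    y = d - np.arange(M) * (d - c) / (M - 1)
    X, Y = np.meshgrid(x, y)
    rho = np.hypot(X, Y)               # radial coordinate
    theta = np.arctan2(Y, X)           # angular coordinate
    inside = rho <= 1.0                # unit-disk mask
    return rho, theta, inside
```

With c = -1, d = 1 the image corners fall at radius sqrt(2) and are masked out, which is why the alternative mapping of Figure 4b (c = -1/sqrt(2), d = 1/sqrt(2)) keeps the whole image inside the disk.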
Figure 4: Mapping of a discrete image function: a) $c = -1$, $d = 1$ and b) $c = -1/\sqrt{2}$, $d = 1/\sqrt{2}$.
Normalize the Zernike moments:
$$Z'_{nm} = \frac{Z_{nm}}{m_{00}} \tag{20}$$
where $Z_{nm}$ is the Zernike moment and $m_{00}$ is the zeroth-order geometric moment.
Because $Z_{nm}$ is complex, the moduli of the Zernike
moments $|Z_{nm}|$ are often used as the shape features in
pattern recognition.
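The normalization of Eq. (20) followed by taking magnitudes can be sketched in a few lines (the function name is illustrative; the Zernike moments themselves are assumed to have been computed already):

```python
import numpy as np

def zernike_features(Z, m00):
    """Normalize complex Zernike moments by the zeroth geometric
    moment m00 (Eq. 20) and keep the rotation-invariant magnitudes
    |Z'_nm| as the feature values."""
    Zn = np.asarray(Z, dtype=complex) / m00   # Eq. (20)
    return np.abs(Zn)                         # moduli used as features
```

Taking the modulus discards the phase, which is exactly the part of the moment that changes under rotation.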
The magnitude of a Zernike moment is rotation
invariant. In terms of mean-square error, an image can
be better described by a small set of its Zernike moments
than by any other type of moments, such as geometric
moments, Legendre moments, rotational moments, and
complex moments. Zernike moments are not, however,
invariant to translation and scaling. Such invariance is
achieved by translating and normalizing the image
before the Zernike moments are calculated.
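The translation step mentioned above can be sketched as a centroid shift; this is a hypothetical helper using a nearest-pixel shift, and scale invariance would be obtained analogously by resampling the image so that $m_{00}$ takes a fixed value:

```python
import numpy as np

def translate_to_centroid(img):
    """Shift the pattern so its centroid lies at the image centre
    (nearest-pixel translation), a common way to obtain translation
    invariance before computing Zernike moments."""
    M, N = img.shape
    y, x = np.mgrid[0:M, 0:N]
    m00 = img.sum()
    xc = (x * img).sum() / m00         # centroid column
    yc = (y * img).sum() / m00         # centroid row
    dy = int(round((M - 1) / 2 - yc))  # shift to geometric centre
    dx = int(round((N - 1) / 2 - xc))
    return np.roll(img, (dy, dx), axis=(0, 1))
```

After this shift the centroid coincides (to the nearest pixel) with the origin of the unit disk used in the Zernike computation.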
4 SIMILARITY MEASURE
Recognition is performed by comparing the feature vector
calculated for an unknown character with a set of feature
vectors for characters obtained from a similar training
set. The moment-invariant approach to character
identification represents the pattern by a set of K
moment-invariant features, that is, as a point in a
K-dimensional feature space. Points corresponding
to patterns of the same class are assumed to lie close
together, away from those of different classes. The
similarity distance $d_m$ between two feature vectors
$M_X$ and $M_Y$ for a pair of character images X and Y
(the category code of image X is identical to the category
code of image Y) is computed as the Euclidean distance
as follows:
$$d_m(M_X, M_Y) = \sqrt{\sum_{k=1}^{K} \left( m_k^X - m_k^Y \right)^2} \tag{21}$$
The value of $d_m$ is zero or small for identical or
similar characters and large for different characters.
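The nearest-neighbour decision rule implied by Eq. (21) can be sketched as follows (the function name and the template layout are illustrative assumptions, not from the paper):

```python
import numpy as np

def classify(feature, templates):
    """Assign an unknown feature vector to the class of the nearest
    training template under the Euclidean distance of Eq. (21).
    templates: dict mapping class label -> list of feature vectors."""
    best_label, best_d = None, np.inf
    feature = np.asarray(feature, dtype=float)
    for label, vecs in templates.items():
        for v in vecs:
            d = np.sqrt(np.sum((feature - np.asarray(v, dtype=float)) ** 2))
            if d < best_d:
                best_label, best_d = label, d
    return best_label, best_d
```

A small example: with one template per class, an unknown vector is labelled by whichever stored vector minimizes $d_m$.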
5 CONCLUSION
The main contribution of this paper is the presentation
of character recognition using a set of orthogonal
moments. In particular, we have constructed a feature
vector by applying the various normalized moments.
This vector, used in conjunction with a simple
classification measure such as the Euclidean distance, is
capable of achieving satisfactory performance levels.
In our experiment we used our own database, which
provides handwritten numerals from a hundred writers.
Each numeral has 10 samples. Since the size of a
sample image varies, we first normalized each image
to a fixed size in pixels. If the Hu moment invariants
ICINCO 2004 - SIGNAL PROCESSING, SYSTEMS MODELING AND CONTROL