Hand Pose Recognition by using Masked Zernike Moments
JungSoo Park, Hyo-Rim Choi, JunYoung Kim and TaeYong Kim
GSAIM, Chung-Ang University, 221 Heuksuk-Dong, Seoul, Republic of Korea
Keywords: Hand Gesture Recognition, Pose Recognition, Zernike Moments, Shape Representation.
Abstract: In this paper we present a novel way of applying Zernike moments to image matching. Zernike moments
are obtained by projecting the image information within a circumscribed circle onto the Zernike basis functions.
However, the power of discrimination may be reduced because hand images contain a large amount of
overlapping information due to their shape characteristics. On the other hand, in pose discrimination the
shape information of hands excluding the overlapping area can increase the power of discrimination. In order
to solve this overlapping-information problem, we present a way of applying subtraction masks. The internal
mask R1 eliminates overlapping information in hand images, while the external mask R2 weights the outstanding
features of hand images. The mask R3 combines the results of the image masked by R1 and the image
masked by R2. The moments obtained with the R3 mask increase the discrimination accuracy for hand poses,
which is shown in experiments comparing against conventional methods.
1 INTRODUCTION
One of the most popular human-computer interaction
(HCI) techniques is based on the vision system,
which can be used easily in various environments.
Among the various vision-based gesture recognition
methods, hand gesture methods are widely used due
to their superior representational ability.
For a vision-based hand gesture interface, the
following steps must be carried out. First, the hand
region must be extracted from the background of an
input image; however, in a real environment it is
difficult to extract the hand region perfectly from
the background because of noise originating from
illumination or color (Yun, 2010). Second, the shape
of the hand in the extracted image must be recognized
reliably; however, many related works focus on hand
gesture recognition based on the number of fingers
rather than on the shape of the hand. Third, the
extracted image should be recognized robustly
against noise, and the recognition method must be
invariant to rotation, translation, and scale changes.
To accomplish these three steps, the depth information
from a Kinect camera is used, which allows us to
extract the hand region easily and robustly, as
sketched below.
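The segmentation can be as simple as keeping a depth band behind the nearest object. The following Python sketch illustrates this idea; it is not the authors' implementation, and the frame format, the band_mm margin, and the helper name extract_hand_region are illustrative assumptions (OpenCV 4 and NumPy are assumed available):

import cv2
import numpy as np

def extract_hand_region(depth_mm, band_mm=150):
    # Assumption: 16-bit depth frame in millimetres, 0 marks invalid pixels,
    # and the hand is the object nearest to the camera.
    valid = depth_mm > 0
    if not valid.any():
        return np.zeros(depth_mm.shape, np.uint8)
    nearest = depth_mm[valid].min()
    mask = (valid & (depth_mm <= nearest + band_mm)).astype(np.uint8) * 255
    # Keep only the largest connected blob to suppress depth noise.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask
    hand = max(contours, key=cv2.contourArea)
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [hand], -1, 255, thickness=cv2.FILLED)
    return clean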
For recognizing the hand shape, rotation-invariant
Zernike moments (Khotanzad, 1990) are used. The
superiority of Zernike moments has been demonstrated
with respect to noise characteristics, low information
redundancy, and image representation ability (Teh,
1988). In general, Zernike moments are obtained by
projecting the hand image information within a
circumscribed circle onto the Zernike basis functions.
However, the images of various hand shapes overlap
in the center area, so pose differences cannot be
observed there, and this similarity between poses
reduces the power of discrimination. On the other
hand, the shape of the outer region can increase
the power of discrimination for hand poses.
In this paper, we propose masks that eliminate the
overlapping image information and emphasize the
important shape information in the Zernike moments,
which improves the accuracy of pose detection
with Principal Component Analysis (Swets, 1996).
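As a conceptual illustration of this masking idea, the sketch below builds an internal mask R1 that suppresses the central disc and an external mask R2 that keeps the outer annulus; the radii r1_frac and r2_frac and the weighted combination in apply_r3 are illustrative assumptions, not the paper's actual parameters:

import numpy as np

def ring_masks(size, r1_frac=0.4, r2_frac=0.7):
    # Normalized distance of each pixel from the image centre.
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(xx - (size - 1) / 2, yy - (size - 1) / 2) / ((size - 1) / 2)
    R1 = (r >= r1_frac).astype(float)  # internal mask: drop overlapping centre
    R2 = (r >= r2_frac).astype(float)  # external mask: keep distinctive rim
    return R1, R2

def apply_r3(img, R1, R2, w=1.0):
    # R3 combines the two masked images; this weighting is an assumption.
    return img * R1 + w * img * R2

The moment vectors computed from the R3-masked images would then be projected by PCA for pose discrimination, following (Swets, 1996).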
2 ZERNIKE MOMENTS
Zernike moments are rotation-invariant descriptors
and can be made scale and translation invariant
through normalization. A method based on these
moments is robust to noise and can represent image
information effectively with only a few values,
which is why they are widely used in pattern
recognition and image representation. The Zernike
values can be considered the result of projecting
an image onto the basis functions.
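As a quick way to reproduce such a projection, the magnitudes of the Zernike moments of a binary hand image can be computed with the third-party mahotas library; this is a sketch for orientation, not the authors' implementation, and the degree and radius choices are illustrative:

import mahotas

def zernike_descriptor(binary_hand, degree=8):
    # Radius of the circumscribed circle of the (roughly square) hand image;
    # mahotas returns the rotation-invariant moment magnitudes.
    radius = min(binary_hand.shape) // 2
    return mahotas.features.zernike_moments(binary_hand, radius, degree=degree)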