Authors: Ilya Afanasyev and Mariolino De Cecco
Affiliation: University of Trento, Italy
Keywords: Superquadrics, Gesture Recognition, Microsoft Kinect, RANSAC Fitting, 3D Object Localization.
Related Ontology Subjects/Areas/Topics: Applications; Computer Vision, Visualization and Computer Graphics; Features Extraction; Geometry and Modeling; Image and Video Analysis; Image-Based Modeling; Motion, Tracking and Stereo Vision; Pattern Recognition; Robotics; Segmentation and Grouping; Shape Representation and Matching; Software Engineering; Stereo Vision and Structure from Motion
Abstract:
This paper presents a 3D gesture recognition and localization method based on processing 3D data of hands in color gloves acquired by a 3D depth sensor such as the Microsoft Kinect. The RGB information of every 3D data point is used to segment the 3D point cloud into 12 parts (a forearm, a palm, and 10 finger parts). The object (a hand with fingers) must be known a priori and anthropometrically modeled by SuperQuadrics (SQ) with given scaling and shape parameters. The gesture (pose) is estimated hierarchically by a RANSAC object search with least-squares fitting of the 3D point cloud segments to the corresponding SQ models: first the pose of the hand (forearm and palm), and then the positions of the fingers. The solution is verified by evaluating the matching score, i.e., the number of inliers whose distances from the SQ surfaces to the 3D data points satisfy an assigned distance threshold.
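The inlier-counting step described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it assumes the standard superquadric inside-outside function (with scale parameters a1, a2, a3 and shape exponents eps1, eps2) and the common radial-distance approximation to the SQ surface; the function names, the distance formula, and the threshold value are illustrative assumptions.

```python
import numpy as np

def sq_inside_outside(points, a, eps):
    """Superquadric inside-outside function F for points (N, 3) expressed
    in the SQ frame. a = (a1, a2, a3) scales, eps = (eps1, eps2) shape
    exponents. F < 1 inside, F = 1 on the surface, F > 1 outside."""
    x, y, z = np.abs(points).T
    a1, a2, a3 = a
    e1, e2 = eps
    return ((x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2.0 / e1)

def matching_score(points, a, eps, threshold=0.005):
    """Count inliers: points whose approximate Euclidean distance to the
    SQ surface (radial approximation, assumed here) is below `threshold`,
    in the same units as the point cloud."""
    F = sq_inside_outside(points, a, eps)
    r = np.linalg.norm(points, axis=1)
    # Radial distance approximation to the SQ surface.
    d = r * np.abs(1.0 - F ** (-eps[0] / 2.0))
    return int(np.sum(d < threshold))
```

For a unit sphere (a = (1, 1, 1), eps = (1, 1)), points lying on the surface score as inliers while points well off the surface do not; in the full pipeline this score would be evaluated for each RANSAC pose hypothesis and the highest-scoring pose kept.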