Authors:
Adel Saleh 1; Hatem A. Rashwan 1; Mohamed Abdel-Nasser 2; Vivek K. Singh 1; Saddam Abdulwahab 1; Md. Mostafa Kamal Sarker 1; Miguel Angel Garcia 3 and Domenec Puig 1
Affiliations:
1 Department of Computer Engineering and Mathematics, Rovira i Virgili University, Tarragona, Spain
2 Department of Computer Engineering and Mathematics, Rovira i Virgili University, Tarragona, Spain; Electrical Engineering Department, Aswan University, 81542 Aswan, Egypt
3 Department of Electronic and Communications Technology, Autonomous University of Madrid, Madrid, Spain
Keyword(s):
Semantic Segmentation, Fully Convolutional Network, Pixel-wise Classification, Finger Parts.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Segmentation and Grouping
Abstract:
Image semantic segmentation is a central topic for computer vision researchers. Indeed, a huge number of applications requires efficient segmentation performance, such as activity recognition, navigation, and human body parsing. One important application is gesture recognition, i.e., the ability to understand human hand gestures by detecting and counting finger parts in a video stream or in still images. Thus, accurate finger parts segmentation yields more accurate gesture recognition. Consequently, in this paper, we highlight two contributions. First, we propose a data-driven deep learning pooling policy based on multi-scale feature map extraction at different scales (called FinSeg). A novel aggregation layer is introduced in this model, in which the feature maps generated at each scale are weighted using a fully connected layer. Second, given the lack of realistic labeled finger parts datasets, we propose a labeled dataset for finger parts segmentation (the FingerParts dataset). To the best of our knowledge, the proposed dataset is the first attempt to build a realistic dataset for finger parts semantic segmentation. The experimental results show that the proposed model yields an improvement of 5% compared to the standard FCN network.
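The aggregation idea sketched in the abstract — weighting the feature maps produced at each scale with scores from a fully connected layer — can be illustrated roughly as follows. This is a minimal NumPy sketch under assumed shapes and an assumed softmax normalization; the function and parameter names are hypothetical and do not reproduce the paper's exact FinSeg architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_multiscale(feature_maps, fc_weights, fc_bias):
    """Hypothetical multi-scale aggregation: score each scale with a
    small fully connected layer, then return the weighted sum of maps.

    feature_maps: list of S arrays, each (C, H, W), assumed already
                  resized to a common spatial resolution.
    fc_weights:   (S, S*C) weight matrix of the FC layer (assumed shape).
    fc_bias:      (S,) bias vector.
    """
    # Global average pooling per scale -> one C-dim descriptor per scale
    descriptors = np.concatenate(
        [fm.mean(axis=(1, 2)) for fm in feature_maps])   # shape (S*C,)
    # FC layer maps the concatenated descriptors to one score per scale
    scores = fc_weights @ descriptors + fc_bias          # shape (S,)
    weights = softmax(scores)
    # Weighted sum of the per-scale feature maps
    return sum(w * fm for w, fm in zip(weights, feature_maps))

# Example: 3 scales, 4 channels, 8x8 spatial maps
rng = np.random.default_rng(0)
maps = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
W = rng.standard_normal((3, 3 * 4)) * 0.1
b = np.zeros(3)
out = aggregate_multiscale(maps, W, b)
print(out.shape)  # (4, 8, 8)
```

The softmax keeps the per-scale weights positive and summing to one, so the output stays on the same scale as the inputs; the paper's actual weighting scheme may differ.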