Authors:
Hussein Chaaban, Michèle Gouiffès and Annelies Braffort
Affiliation:
Université Paris-Saclay, CNRS, LIMSI, 91400, Orsay, France
Keyword(s):
LSF Videos, Annotations, Lexical Signs, Sign Segmentation.
Abstract:
The automatic recognition of Sign Languages is the main focus of most works in the field, which explains the growing demand for annotated data to train the dedicated models. In this paper, we present a semi-automatic annotation system for Sign Languages. Such automation will not only help to create training data but will also reduce the processing time and the subjectivity of manual annotations done by linguists to study sign language. The system analyses hand shapes, hand speed variations, and face landmarks to annotate base-level features and to separate the different signs. In a second stage, signs are classified into two types, lexical (i.e. present in a dictionary) or iconic (illustrative), using a probabilistic model. The results show that our system is partially capable of annotating the video sequence automatically, with an F1 score of 0.68 for lexical sign annotation and an error of 3.8 frames for sign segmentation. An expert validation of the annotations is still needed.