parameters that define the manual and non-manual
components. The manual component includes:
- Hand configuration. Portuguese Sign Language has a total of 57 identified hand configurations.
- Palm orientation. Some pairs of configurations differ only in the orientation of the palm.
- Location of articulation (gestural space).
- Movement of the hands.
The non-manual component comprises:
- Body movement, which is responsible for introducing a temporal context.
- Facial expressions, which add a sense of emotion to the speech.
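Purely as an illustration, the parameters above can be grouped in a simple data structure; the field names and types below are assumptions for this sketch, not part of the VirtualSign implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names and types are assumptions,
# not taken from the VirtualSign project itself.
@dataclass
class GestureParameters:
    # Manual component
    hand_configuration: int   # one of the 57 identified configurations
    palm_orientation: str     # distinguishes otherwise identical configurations
    location: str             # point of articulation in the gestural space
    movement: str             # movement of the hands
    # Non-manual component
    body_movement: Optional[str] = None      # introduces a temporal context
    facial_expression: Optional[str] = None  # adds a sense of emotion

g = GestureParameters(hand_configuration=12, palm_orientation="up",
                      location="chest", movement="circular")
```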
3 RELATED WORK
In the last two decades, a significant number of works have been published on techniques to automate the translation of sign languages, with greater incidence on American Sign Language (Morrissey and Way, 2005), and on the introduction of serious games in the education of people with speech and/or hearing disabilities (Gameiro et al., 2014).
Several of the methods proposed for the representation and recognition of sign language gestures apply state-of-the-art techniques involving segmentation, tracking and feature extraction, as well as specific hardware such as depth sensors and data gloves.
The collected data is classified by applying a random forests algorithm (Biau, 2012), yielding an average accuracy rate of 49.5%.
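A minimal sketch of such a random-forest classification step, using scikit-learn with synthetic feature vectors standing in for the original sensor data (the feature layout and class count here are assumptions, not the cited pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative only: random "feature vectors" in place of per-frame
# hand measurements (e.g. finger flexion, palm orientation).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))    # 300 samples, 10 features
y = rng.integers(0, 5, size=300)  # 5 hypothetical gesture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of correctly classified samples
```

With random labels, as here, the accuracy is near chance; with real gesture features, this is where a figure such as the reported 49.5% would come from.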
Cooper et al. (2011) use linguistic concepts to identify the constituent features of a gesture, describing the motion, location and shape of the hand. These elements are combined using Hidden Markov Models (HMMs) for gesture recognition, with recognition rates in the order of 71.4%.
The CopyCat project (Brashear et al., 2010) is an interactive adventure and educational game with ASL recognition. Colored gloves equipped with accelerometers are used to simplify the segmentation of the hands and to estimate the acceleration, direction and rotation of hand motion. The data is classified using HMMs, yielding an accuracy of 85%.
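Both HMM-based approaches above score gesture feature sequences against per-gesture models and select the most likely one. A self-contained sketch of that core idea, using the scaled forward algorithm for discrete-observation HMMs in plain NumPy (all models and sequences here are toy assumptions, not the cited systems):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | model) for a discrete HMM via the scaled forward algorithm.
    pi: initial state probabilities, A: state transitions, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_lik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict next state, weight by emission
        c = alpha.sum()
        log_lik += np.log(c)           # accumulate log of scaling factors
        alpha = alpha / c
    return log_lik

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
# Toy "gesture" models: one mostly emits symbol 0, the other symbol 1.
B_gesture_a = np.array([[0.9, 0.1], [0.8, 0.2]])
B_gesture_b = np.array([[0.1, 0.9], [0.2, 0.8]])

obs = [0, 0, 1, 0]  # an observed feature-symbol sequence
scores = {"a": forward_log_likelihood(obs, pi, A, B_gesture_a),
          "b": forward_log_likelihood(obs, pi, A, B_gesture_b)}
best = max(scores, key=scores.get)  # classify as the highest-likelihood model
```

In a real recognizer the discrete symbols would be quantized sensor features, one HMM would be trained per gesture, and the argmax over model likelihoods would give the recognized sign.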
ProDeaf is an application that translates Portuguese text or voice into Brazilian Sign Language (ProDeaf, 2016). This project is very similar to one of the main components of the VirtualSign game, the text-to-gesture translation. The objective of ProDeaf is to ease communication between mute and deaf people by making digital content accessible in Brazilian Sign Language. The translation is done by a 3D avatar that performs the gestures. ProDeaf already has over 130,000 users.
Showleap is a recent Spanish Sign Language translator (Showleap, 2016) that claims to translate sign language to voice and voice into sign language. Showleap uses the Leap Motion, a device capable of detecting hands through two monochromatic IR cameras and three infrared LEDs, together with the Myo armband. The armband detects arm motion, rotation and some hand gestures through electromyographic sensors that read electrical signals from the muscles of the arm. So far Showleap has published no precise translation results, and its creators claim that the product is 90% done (Showleap, 2015).
MotionSavvy Uni is another sign language translator that makes use of the Leap Motion (Motionsavvy, 2016). It converts gestures into text and voice, and voice into text; text and voice are not converted into sign language with Uni. The translator has been designed to be built into a tablet. Uni claims to support 2000 signs at launch and allows users to create their own signs.
Two students at the University of Washington won the Lemelson-MIT Student Prize by creating a prototype glove that can translate sign language into speech or text (University of Washington, 2016). The gloves have sensors in both the hands and the wrists, from which information on hand movement and rotation is retrieved. There are no clear results yet, as the project is a recent prototype.
4 VirtualSign TRANSLATOR
VirtualSign aims to contribute to greater social inclusion of the deaf through the creation of a bidirectional translator between sign language and text. In addition, a serious game was developed to assist in the process of learning sign language.
The project bundles three interlinked modules:
Translator of Sign Language to Text:
module responsible for the capture, interpretation and translation of sign language gestures into text. A pair of sensor gloves (5DT
Data Gloves) provides input about the
configuration of the hands while the Microsoft