2 RELATED WORK
2.1 Emotion
An emotion is a set of automatic responses to external
situations. There are, of course, bodily responses:
facial movements, a racing heart, gestures, sweat
running down the face. Everyone has experienced
these responses, whether during an oral exam or a
romantic encounter.
Psychologists have proposed various lists of basic
human emotions, ranging from 2 to 18 categories.
Ekman and his group conducted numerous studies of
facial expressions and concluded that there are six
basic emotions, also called primary emotions: joy,
sadness, surprise, fear, anger, and disgust.
Table 1: Basic emotions proposed by psychologists

  Authors        Emotions
  Izard (1977)   Joy, surprise, anger, fear, sadness,
                 contempt, distress, interest, guilt,
                 shame
  Ekman (1992)   Anger, fear, sadness, joy, disgust,
                 surprise
2.2 Comparison of Three Critical Articles
Related to Our Contribution
In one of the best-known works on emotion
recognition, Shervin Minaee and Amirali Abdolrashidi
present a deep-learning-based face alignment
approach: a robust convolutional-neural-network face
calibration algorithm. Their Deep Alignment Network
performs face calibration based on whole-face
images, in contrast to recent face alignment
algorithms, which makes it remarkably accurate under
large variations in both initialization and head pose.
The work of Mira Jeong and Byoung Chul Ko on
emotion recognition defines "affective computing" as
the development of systems, devices, and mechanisms
that can recognize and interpret a person's affects
through attributes such as facial appearance, the
depth and modulation of the voice, and biological
signals.
In their article, Kamran Ali, Likin Isler, and
Charles Hughes present a facial expression
recognition system aimed at real-world conditions,
addressing the variations in pose and occlusion that
occur in the wild. The authors run several new
experiments on FER datasets under these conditions
and propose a new Region Attention Network (RAN),
which itself demonstrates the importance of facial
landmarks.
2.3 Databases
The databases commonly used for building a facial
expression recognition system:
CK+: the extended Cohn-Kanade dataset.
FER2013: the Facial Expression Recognition 2013
dataset.
JAFFE: contains 213 images of the seven facial
expressions posed by 10 Japanese female models.
FERG: a database of stylized characters with
annotated facial expressions.
MMI: contains 213 image sequences, of which 205
sequences with frontal-view faces of 31 subjects
were used in our experiment.
3 METHODS
HA-GAN: The fundamental goal of the HA-GAN
method is to learn to encode expression information
from an input image and then transfer that
information to a generated image. Facial expression
recognition is then performed on the resulting
expression images.
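The pipeline described above (encode the expression of an input face, transfer it onto a generated image, then classify the result) can be sketched schematically. The encoder, generator, and classifier below are toy linear stand-ins, not the actual HA-GAN networks:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: random linear maps in place of trained networks.
D_IMG, D_EXPR = 32, 8
W_enc = rng.standard_normal((D_EXPR, D_IMG)) * 0.1   # expression encoder
W_gen = rng.standard_normal((D_IMG, D_EXPR)) * 0.1   # generator
W_cls = rng.standard_normal((7, D_IMG)) * 0.1        # 7-way classifier

def encode_expression(img):
    """Extract an expression code from an input image (toy encoder)."""
    return W_enc @ img

def generate(expr_code):
    """Synthesize an image carrying the given expression (toy generator)."""
    return W_gen @ expr_code

def recognize(img):
    """Classify the expression of a generated image (toy classifier)."""
    return int(np.argmax(W_cls @ img))

source = rng.standard_normal(D_IMG)            # input face
transferred = generate(encode_expression(source))
label = recognize(transferred)                 # expression class of output
```

The point of the sketch is only the data flow: expression code extracted from the source, re-injected by the generator, and the recognizer applied to the synthesized output.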
Hierarchical weighted random forest (WRF): for
real-time embedded systems, geometric features
combined with a hierarchical WRF are used.
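The core idea of a weighted random forest, a weighted majority vote over the trees' predictions, can be sketched in plain Python. The threshold rules and the "mouth-corner distance" feature below are hypothetical stand-ins for trained decision trees on geometric features:

```python
from collections import defaultdict

def weighted_forest_predict(trees, weights, x):
    """Weighted majority vote: each tree's class vote counts with its weight."""
    scores = defaultdict(float)
    for tree, w in zip(trees, weights):
        scores[tree(x)] += w
    return max(scores, key=scores.get)

# Toy stand-in "trees": threshold rules on a single geometric feature
# (e.g. a normalized mouth-corner distance, hypothetical here).
trees = [
    lambda x: "joy" if x > 0.5 else "neutral",
    lambda x: "joy" if x > 0.7 else "neutral",
    lambda x: "joy" if x > 0.3 else "neutral",
]
weights = [0.5, 0.2, 0.3]  # e.g. derived from per-tree validation accuracy

print(weighted_forest_predict(trees, weights, 0.6))  # joy: 0.5 + 0.3 = 0.8
```

In the hierarchical variant, such forests are arranged in stages (e.g. coarse expression groups first, then fine classes), which keeps per-stage cost low for embedded hardware.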
Attentional Convolutional Network (ACN): an
attention mechanism that allows the convolutional
network to focus on the most feature-rich regions
of the face.
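The "focus on feature-rich regions" idea amounts to weighting spatial positions of a feature map by a softmax attention distribution. A minimal NumPy sketch (the feature map and logits are random placeholders, not outputs of a real network):

```python
import numpy as np

def spatial_attention_pool(features, attn_logits):
    """Attention-weighted pooling over spatial positions.

    features:    (H*W, C) feature vectors, one per spatial position
    attn_logits: (H*W,)   unnormalized attention scores
    Returns a (C,) vector emphasizing high-attention positions.
    """
    a = np.exp(attn_logits - attn_logits.max())
    a /= a.sum()                      # softmax over positions
    return a @ features               # weighted sum of position features

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))  # e.g. a 4x4 map with 8 channels
logits = np.zeros(16)
logits[5] = 10.0                      # one position dominates attention
pooled = spatial_attention_pool(feats, logits)
# pooled is close to feats[5], since its softmax weight is near 1
```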
VGGNet was developed by the Visual Geometry Group
at the University of Oxford.
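VGGNet's characteristic design stacks 3x3 convolutions (stride 1, 'same' padding) separated by 2x2 max-pooling. The sketch below traces how the spatial size and channel count evolve through a VGG-16-style configuration; it computes shapes only, with no actual network:

```python
# VGG-16-style configuration: integers are the output channels of a 3x3
# conv (stride 1, pad 1, spatial size unchanged); "M" is a 2x2 max-pool.
VGG16_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]

def trace_shapes(cfg, size=224, channels=3):
    """Return the (channels, spatial size) after each stage of the config."""
    shapes = []
    for item in cfg:
        if item == "M":
            size //= 2            # 2x2 max-pool, stride 2, halves H and W
        else:
            channels = item       # 3x3 conv with 'same' padding
        shapes.append((channels, size))
    return shapes

print(trace_shapes(VGG16_CFG)[-1])  # (512, 7): 512 channels at 7x7
```

Five pooling stages reduce a 224x224 input to 7x7 with 512 channels, which is the feature map the fully connected classifier layers consume.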