
tively. These low rates show the importance of the mouth in classifying these two emotions. We also observed that when the lower part of the face is used, the classification rates for these emotions increase. Therefore, to correctly classify the surprise and disgust emotions, the three facial key parts (mouth, eyebrows and eyes) must be used, which confirms our first method. Anger, fear and sadness were defined, according to our first method, by two key parts (eyes and eyebrows).
Table 7 shows high classification rates for the anger and fear emotions, which supports the first method. However, sadness was classified with low rates, so for this emotion the mouth is also relevant and improves classification.
Finally, the joy emotion was defined by a single key part (the mouth), and the second method confirms this definition. When the mouth was hidden, this emotion was classified with low rates (19% with KNN and 20% with SVM), whereas Table 7 shows high classification rates when the mouth was not eliminated.
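As an illustration of this kind of occlusion experiment, the following sketch compares KNN and SVM accuracy when the mouth landmarks are removed from the feature vector. It is not the authors' implementation: the 68-point landmark layout, the synthetic data and the classifier hyperparameters are assumptions made only for the example.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assumed 68-point landmark layout; each sample stores 68 (x, y) pairs.
PARTS = {
    "eyebrows": range(17, 27),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def select_parts(X, parts):
    # Keep only the coordinates of the landmarks in the chosen key parts.
    idx = sorted(i for p in parts for i in PARTS[p])
    cols = [c for i in idx for c in (2 * i, 2 * i + 1)]
    return X[:, cols]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 68 * 2))   # placeholder landmark features
y = rng.integers(0, 6, size=300)     # placeholder labels for six emotions

# Compare classification with and without the mouth landmarks.
for parts in (["eyebrows", "eyes", "mouth"], ["eyebrows", "eyes"]):
    X_tr, X_te, y_tr, y_te = train_test_split(select_parts(X, parts), y, random_state=0)
    for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(parts, type(clf).__name__, round(acc, 2))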
5 CONCLUSION
The goal of this study is to define a primary emotion (joy, fear, sadness, etc.) by a minimum number of characteristic points, which is particularly useful for real-time applications. Our proposed system consists of four phases: face detection, characteristic point localization, information extraction and classification. Our contribution is twofold: identifying the most important facial key parts needed to recognize a basic emotion, and reducing the number of characteristic points required to define an emotion. The rates obtained in the experimental results demonstrate the effectiveness of the suggested system.
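A minimal, hypothetical skeleton of this four-phase pipeline is sketched below; the phase bodies are placeholders and the distance-based feature encoding is an assumption for illustration, not the exact method used in this work.

import numpy as np

def detect_face(image):
    # Phase 1: return the bounding box of the face (placeholder: whole image).
    h, w = image.shape[:2]
    return (0, 0, w, h)

def localize_points(image, box):
    # Phase 2: locate a reduced set of characteristic points on the key
    # parts (eyebrows, eyes, mouth). Placeholder: random points in the box.
    x, y, w, h = box
    rng = np.random.default_rng(0)
    return rng.uniform([x, y], [x + w, y + h], size=(20, 2))

def extract_features(points):
    # Phase 3: turn point coordinates into a feature vector, here pairwise
    # distances between points (an assumed encoding).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d[np.triu_indices(len(points), k=1)]

def classify(features, model):
    # Phase 4: predict the emotion label with a trained classifier (KNN or SVM).
    return model.predict(features.reshape(1, -1))[0]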
ACKNOWLEDGEMENTS
The authors would like to acknowledge the financial support of this work by grants from the General Direction of Scientific Research (DGRST), Tunisia, under the ARUB program.
REFERENCES
Taheri, A. R., Alemi, M., A. M. H. R. P., and Basiri, N. M. (2014). Social robots as assistants for autism therapy in Iran: Research in progress. In 2nd International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran.
Abate, A. F., Cimmino, L., Narducci, B.-C. M. F., and Pop, F. (2022). The limitations for expression recognition in computer vision introduced by facial masks. Multimedia Tools and Applications.
Bourke, C., Douglas, K., and P., R. (2010). Processing of facial emotion expression in major depression: a review. Aust. N. Z. J. Psychiatry.
Chung, C.-C., Lin, W.-T., Zhang, R., Liang, K.-W., and
Chang, P.-C. (2019). Emotion estimation by joint fa-
cial expression and speech tonality using evolutionary
deep learning structures. In IEEE Global Conference
on Consumer Electronics (GCCE), pages 12–14.
Elsayed, Y., Elsayed, A., and Abdou, M. A. (2023). An automatic improved facial expression recognition for masked faces. Neural Computing and Applications, pages 14963–14972.
De la Torre, F., Chu, W. S., X. X. F. V. X. D., and Cohn, J. (2015). IntraFace. In 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Ljubljana, Slovenia.
Foggia, P., Greco, A., Saggese, A., and Vento, M. (2023). Multi-task learning on the edge for effective gender, age, ethnicity and emotion recognition. Eng. Appl. Artif. Intell.
Fuletra, J. D. and Bosamiya, D. (2013). Int. J. Recent Innov. Trends Comput. Commun.
Huang, Z. Y., Chiang, C. C., and Chen, J. H. (2023). A study on computer vision for facial emotion recognition. Sci. Rep.
Khalifa, I., Ejbali, R., and , M. (2019). Body gesture modeling for psychology analysis in job interview based on deep spatio-temporal approach. In Parallel and Distributed Computing, Applications and Technologies: 19th International Conference (PDCAT), Jeju Island, South Korea.
Iqbal, J. M., Kumar, M. S., G. R., G. M., A. N., S., Karthik, A., and N., B. (2023). Facial emotion recognition using geometrical features based deep learning techniques. International Journal of Computers Communications and Control.
Leong, S., Tang, Y. M., Lai, C. H., and Lee, C. (2023). Using a social robot to teach gestural recognition and production in children with autism spectrum disorders. Disability and Rehabilitation: Assistive Technology.
Nwosu, L., Wang, H., Lu, J., I. U. X. Yang, and Zhang, T. (2017). Deep Convolutional Neural Network for Facial Expression Recognition Using Facial Parts. In 15th Int. Conf. on Dependable, Autonomic and Secure Computing, Orlando, FL, USA.
Pawar, M. and Kokate, R. (2021). Convolution neural network based automatic speech emotion recognition using mel-frequency cepstrum coefficients. Multimed. Tools Appl., Springer.
Plutchik, R. (1980). A general psychoevolutionary theory
of emotion. In Theories of Emotion. Elsevier.
Afdhal, R., Bahar, A., Ejbali, R., and , M. (2015). Face detection using beta wavelet filter and cascade classifier entrained with Adaboost. In Eighth International Conference on Machine Vision (ICMV), Barcelona, Spain.