
The second achievement of this work was to integrate the implemented model into an application that runs on mobile devices. As mentioned, TensorFlow Lite was the tool that allowed us to reduce the disk size of the FER model to a 13.23 MB file. Also, as can be seen in the third part of the results, the tests on different devices were satisfactory: the model processes a prediction in an average of 14.39 ms on a tablet and 16.06 ms on a phone. This shows that a lightweight model can achieve high accuracy and near-instantaneous predictions.
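As a minimal sketch of how such average-latency figures could be measured, the helper below times repeated calls to a prediction function with the standard library. The function name, the number of runs, and the stand-in predictor are illustrative assumptions, not taken from the paper; on a device, the callable would wrap the TensorFlow Lite interpreter's invoke step.

```python
import statistics
import time

def measure_latency_ms(predict, sample, runs=100, warmup=10):
    """Average wall-clock latency of predict(sample), in milliseconds."""
    for _ in range(warmup):
        # Warm-up calls are excluded so one-time setup cost is not timed.
        predict(sample)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        predict(sample)
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(times)

# Stand-in predictor for illustration only; a real measurement would
# pass a function that runs the TFLite model on one face image.
avg = measure_latency_ms(lambda x: sum(x), list(range(1000)))
print(f"average latency: {avg:.2f} ms")
```

Averaging over many runs, with warm-up iterations discarded, is what makes per-device figures such as 14.39 ms and 16.06 ms comparable.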
For our future work, two points are in mind. First, we will aim to obtain better accuracy in our model training by adjusting the hyperparameters that have been used (Leon-Urbano and Ugarte, 2020) or used for other applications (Cornejo et al., 2021). Similarly, we believe that using larger images, instead of the 48x48 ones, could help with this goal (Lozano-Mejía et al., 2020). Second, we plan to use the potential of automatic facial expression recognition in a serious game about emotions for children with autism. This will allow us to increase the dynamism of the activities and demonstrate the capability of artificial intelligence in human-computer interaction.
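The first future-work point can be sketched as a simple grid search over the settings mentioned above. The candidate values (apart from the 48x48 baseline) and the `train_and_evaluate` helper are hypothetical placeholders, not the paper's actual procedure:

```python
from itertools import product

# Candidate settings; 48x48 is the paper's baseline input size,
# the remaining values are illustrative assumptions.
learning_rates = [1e-3, 1e-4]
batch_sizes = [32, 64]
input_sizes = [(48, 48), (96, 96), (128, 128)]

def train_and_evaluate(lr, batch, size):
    """Hypothetical stand-in: would train the FER model with these
    settings and return validation accuracy. Returns a placeholder."""
    return 0.0

results = [
    (lr, b, s, train_and_evaluate(lr, b, s))
    for lr, b, s in product(learning_rates, batch_sizes, input_sizes)
]
best = max(results, key=lambda t: t[3])
print("configurations tried:", len(results))
print("best configuration:", best[:3])
```

Enumerating configurations this way makes the accuracy-versus-input-size trade-off explicit before committing to longer training runs.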
REFERENCES
Chuanjie, Z. and Changming, Z. (2020). Facial expression recognition integrating multiple CNN models. In ICCC, pages 1410–1414.
Cornejo, L., Urbano, R., and Ugarte, W. (2021). Mobile application for controlling a healthy diet in Peru using image recognition. In FRUCT, pages 32–41. IEEE.
Dantas, A. C. and do Nascimento, M. Z. (2022). Recognition of emotions for people with autism: An approach to improve skills. Int. J. Comput. Games Technol., 2022:6738068:1–6738068:21.
Ekman, P. and Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2):124–129.
Farkhod, A., Abdusalomov, A. B., Mukhiddinov, M., and Cho, Y. (2022). Development of real-time landmark-based emotion recognition CNN for masked faces. Sensors, 22(22):8704.
Garcia-Garcia, J. M., Penichet, V. M. R., Lozano, M. D., and Fernando, A. (2022). Using emotion recognition technologies to teach children with autism spectrum disorder how to identify and express emotions. Univers. Access Inf. Soc., 21(4):809–825.
Hua, W., Dai, F., Huang, L., Xiong, J., and Gui, G. (2019). HERO: Human emotions recognition for realizing intelligent internet of things. IEEE Access, 7:24321–24332.
Leon-Urbano, C. and Ugarte, W. (2020). End-to-end electroencephalogram (EEG) motor imagery classification with long short-term. In SSCI, pages 2814–2820. IEEE.
Lozano-Mejía, D. J., Vega-Uribe, E. P., and Ugarte, W. (2020). Content-based image classification for sheet music books recognition. In EirCON, pages 1–4. IEEE.
Minaee, S., Minaei, M., and Abdolrashidi, A. (2021). Deep-Emotion: Facial expression recognition using attentional convolutional network. Sensors, 21(9):3046.
Mollahosseini, A., Chan, D., and Mahoor, M. H. (2016). Going deeper in facial expression recognition using deep neural networks. In WACV, pages 1–10.
Murugappan, M. and Mutawa, A. (2021). Facial geometric feature extraction based emotional expression classification using machine learning algorithms. PLOS ONE, 16(2):e0247131.
Nan, Y., Ju, J., Hua, Q., Zhang, H., and Wang, B. (2022). A-MobileNet: An approach of facial expression recognition. Alexandria Engineering Journal, 61(6):4435–4444.
Pavez, R., Díaz, J., Arango-López, J., Ahumada, D., Méndez-Sandoval, C., and Moreira, F. (2023). Emo-mirror: A proposal to support emotion recognition in children with autism spectrum disorders. Neural Comput. Appl., 35(11):7913–7924.
Singh, S. and Nasoz, F. (2020). Facial expression recognition with convolutional neural networks. In CCWC, pages 0324–0328.
Ullah, Z., Ismail Mohmand, M., ur Rehman, S., Zubair, M., Driss, M., Boulila, W., Sheikh, R., and Alwawi, I. (2022). Emotion recognition from occluded facial images using deep ensemble model. Computers, Materials & Continua, 73(3):4465–4487.
Verma, V. and Rani, R. (2021). Recognition of facial expressions using a deep neural network. In SPIN, pages 585–590.
Vulpe-Grigorasi, A. and Grigore, O. (2021). Convolutional neural network hyperparameters optimization for facial emotion recognition. In ATEE, pages 1–5.
Wahab, M. N. A., Nazir, A., Ren, A. T. Z., Noor, M. H. M., Akbar, M. F., and Mohamed, A. S. A. (2021). EfficientNet-Lite and hybrid CNN-KNN implementation for facial expression recognition on Raspberry Pi. IEEE Access, 9:134065–134080.
Zarif, N. E., Montazeri, L., Leduc-Primeau, F., and Sawan, M. (2021). Mobile-optimized facial expression recognition techniques. IEEE Access, 9:101172–101185.
Zeng, G., Zhou, J., Jia, X., Xie, W., and Shen, L. (2018). Hand-crafted feature guided deep learning for facial expression recognition. In FG 2018, pages 423–430.
Zhou, N., Liang, R., and Shi, W. (2021). A lightweight convolutional neural network for real-time facial expression detection. IEEE Access, 9:5573–5584.
ICT4AWE 2024 - 10th International Conference on Information and Communication Technologies for Ageing Well and e-Health