Recognition. Additionally, it is sensitive to noise: if the surroundings are too
noisy, the NAO may fail to detect the person's voice, or have difficulty doing
so. Testing the code was difficult at this stage because Choregraphe cannot be
used to test behaviors that require audio input. Moreover, the person cannot
stand far from the robot's microphones (located in its head); otherwise the
captured signal is weak and the robot may fail to understand the message. In
this regard, we noticed that the NAO robot captures numbers more easily than
long words when listening: it is more likely to ask for a word to be repeated
than for a dictated number. Another limitation concerns Internet access: if the
NAO robot could connect to the Internet, our total flow time would be severely
reduced. Finally, special care must be taken when training the classification
algorithms, because some datasets can cause overtraining, which would produce
irregular results (Burga-Gutierrez et al., 2020).
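The overtraining risk can be illustrated with a minimal, hypothetical sketch
(unrelated to our actual dataset): a 1-nearest-neighbour classifier memorises
its training set, so its training accuracy is a perfect 1.0 even on noisy,
overlapping classes, while accuracy on a held-out split reveals the true
performance. This is why a train/test split (or cross-validation) is needed
before trusting any classifier's results.

```python
import random

def nn_predict(train_X, train_y, x):
    # 1-nearest neighbour: predict the label of the closest training point.
    i = min(range(len(train_X)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
    return train_y[i]

def accuracy(train_X, train_y, X, y):
    hits = sum(nn_predict(train_X, train_y, x) == t for x, t in zip(X, y))
    return hits / len(y)

random.seed(0)
# Two heavily overlapping Gaussian classes: no model can be perfect on new data.
data = [([random.gauss(c, 1.5), random.gauss(c, 1.5)], c)
        for c in (0, 1) for _ in range(40)]
random.shuffle(data)
train, test = data[:60], data[60:]
tX, ty = zip(*train)
sX, sy = zip(*test)

print("train accuracy:", accuracy(tX, ty, tX, ty))  # 1.0: pure memorisation
print("test accuracy:", accuracy(tX, ty, sX, sy))   # noticeably lower
```

The training score is perfect because each training point is its own nearest
neighbour; only the held-out score says anything about generalisation.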
For future improvements, more diseases can be added to the dataset so that it
covers a larger field, and the same algorithm can then be retrained. Although
our premise is that the input data is spoken aloud, the flow time could
decrease if the patient, instead of dictating the symptoms one by one, were
given a table of numbered symptoms and told the numbers to the robot.
Similarly, receiving the telephone number by dictation increases the flow time
considerably; we suggest that this data be typed in instead, and that the
results be sent to the doctor where applicable. Additionally, by skipping this
step we can avoid scaring the patient, since we do not know how sensitive they
may be, and they could even misinterpret the robot's comments on the results.
Furthermore, our approach could be combined with other kinds of smart health
allocation systems (Ugarte, 2022).
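The numbered-symptom idea could look like the following sketch, which also
exploits the observation that the NAO recognises digits more reliably than long
words. The table and its numbering here are purely illustrative; the real
mapping would come from the dataset used to train the classifier.

```python
# Hypothetical numbering: the actual table would be derived from the dataset.
SYMPTOM_TABLE = {
    "1": "cough",
    "2": "fever",
    "3": "shortness of breath",
    "4": "sore throat",
}

def symptoms_from_dictation(recognised_words):
    """Map number tokens recognised by the robot to symptom labels,
    silently dropping anything outside the table (e.g. mis-heard words)."""
    return [SYMPTOM_TABLE[w] for w in recognised_words if w in SYMPTOM_TABLE]

print(symptoms_from_dictation(["1", "3", "hello", "2"]))
# ['cough', 'shortness of breath', 'fever']
```

In practice, restricting the robot's speech-recognition vocabulary to the digit
words in the table would further reduce recognition errors and repetitions.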
REFERENCES
Arslan, H. (2021). COVID-19 prediction based on genome similarity of human sars-cov-2 and bat sars-cov-like coronavirus. Comput. Ind. Eng., 161.
Barrutia-Barreto, I., Sánchez-Sánchez, R. M., and Silva-Marchan, H. A. (2021). Consecuencias económicas y sociales de la inamovilidad humana bajo covid-19 caso de estudio Perú. Lecturas de Economía, 1(94).
Brunese, L., Martinelli, F., Mercaldo, F., and Santone, A. (2020). Machine learning for coronavirus covid-19 detection from chest x-rays. Procedia Computer Science, 176.
Burga-Gutierrez, E., Vasquez-Chauca, B., and Ugarte, W. (2020). Comparative analysis of question answering models for HRI tasks with NAO in spanish. In SIMBig.
Burns, R. B., Lee, H., Seifi, H., Faulkner, R., and Kuchenbecker, K. J. (2022). Endowing a NAO robot with practical social-touch perception. Frontiers Robotics AI, 9.
Fale, M. I. and Gital, A. Y. (2022). Dr. Flynxz - A first aid mamdani-sugeno-type fuzzy expert system for differential symptoms-based diagnosis. J. King Saud Univ. Comput. Inf. Sci., 34(4).
Filippini, C., Perpetuini, D., Cardone, D., and Merla, A. (2021). Improving human-robot interaction by enhancing NAO robot awareness of human facial expression. Sensors, 21(19).
Gianella, C., Gideon, J., and Romero, M. J. (2021). What does covid-19 tell us about the peruvian health system? Canadian Journal of Development Studies / Revue canadienne d'études du développement, 42(1-2).
Hänsch, R. (2021). Handbook of Random Forests - Theory and Applications for Remote Sensing. Series in Computer Vision. World Scientific.
Hoffmann, M., Wang, S., Outrata, V., Alzueta, E., and Lanillos, P. (2021). Robot in the mirror: Toward an embodied computational model of mirror self-recognition. Künstliche Intell., 35(1).
Miyahira, J. (2020). Lo que nos puede traer la pandemia. Revista Medica Herediana, 31(2).
Rehman, M., Shah, R. A., Khan, M. B., Shah, S. A., Abuali, N. A., Yang, X., Alomainy, A., Imran, M. A., and Abbasi, Q. H. (2021). Improving machine learning classification accuracy for breathing abnormalities by enhancing dataset. Sensors, 21(20).
Romero-García, R., Martínez-Tomás, R., Pozo, P., de la Paz, F., and Sarriá, E. (2021). Q-CHAT-NAO: A robotic approach to autism screening in toddlers. J. Biomed. Informatics, 118.
Rozo, A., Buil, J., Moeyersons, J., Morales, J. F., van der Westen, R. G., Lijnen, L., Smeets, C., Jantzen, S., Monpellier, V., Ruttens, D., Hoof, C. V., Huffel, S. V., Groenendaal, W., and Varon, C. (2021). Controlled breathing effect on respiration quality assessment using machine learning approaches. In IEEE CinC.
Shotton, J., Sharp, T., Kohli, P., Nowozin, S., Winn, J. M., and Criminisi, A. (2013). Decision jungles: Compact and rich models for classification. In NIPS.
Ugarte, W. (2022). Vaccination planning in peru using constraint programming. In ICAART.
Yoon, Y., Ko, W., Jang, M., Lee, J., Kim, J., and Lee, G. (2019). Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In IEEE ICRA.
Classification of Respiratory Diseases Using the NAO Robot