
human-NAO interaction research: A scoping review. Frontiers in Robotics and AI, 8:744526.
Banaeian, H. and Gilanlioglu, I. (2021). Influence of the NAO robot as a teaching assistant on university students’ vocabulary learning and attitudes. Australasian Journal of Educational Technology, 37(3):71–87.
Baraka, K., Alves-Oliveira, P., and Ribeiro, T. (2020). An
extended framework for characterizing social robots.
Human-Robot Interaction: Evaluation Methods and
Their Standardization, pages 21–64.
Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1:71–81.
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati,
B., and Tanaka, F. (2018). Social robots for education:
A review. Science Robotics, 3(21):eaat5954.
Breazeal, C., Dautenhahn, K., and Kanda, T. (2016). Social
robotics. Springer Handbook of Robotics, pages 1935–
1972.
Broekens, J., Heerink, M., Rosendal, H., et al. (2009). As-
sistive social robots in elderly care: a review. Geron-
technology, 8(2):94–103.
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In Usability Evaluation in Industry. Taylor and Francis.
Brooke, J. (2013). SUS: A retrospective. Journal of Usability Studies, 8(2).
Campos, J., Kennedy, J., and Lehman, J. F. (2018). Chal-
lenges in exploiting conversational memory in human-
agent interaction. In Proceedings of the 17th Interna-
tional Conference on Autonomous Agents and Multi-
Agent Systems, pages 1649–1657.
Chai, J. Y., She, L., Fang, R., Ottarson, S., Littley, C., Liu, C., and Hanson, K. (2014). Collaborative effort towards common ground in situated human-robot dialogue. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pages 33–40.
Coghlan, S., Leins, K., Sheldrick, S., Cheong, M., Gooding,
P., and D’Alfonso, S. (2023). To chat or bot to chat:
Ethical issues with using chatbots in mental health.
Digital Health, 9:20552076231183542.
Deriu, J., Rodrigo, A., Otegi, A., Echegoyen, G., Rosset,
S., Agirre, E., and Cieliebak, M. (2021). Survey on
evaluation methods for dialogue systems. Artificial
Intelligence Review, 54:755–810.
Dino, F., Zandie, R., Abdollahi, H., Schoeder, S., and
Mahoor, M. H. (2019). Delivering cognitive behav-
ioral therapy using a conversational social robot. In
2019 IEEE/RSJ International Conference on Intelli-
gent Robots and Systems (IROS), pages 2089–2095.
IEEE.
Fong, T., Nourbakhsh, I., and Dautenhahn, K. (2003). A
survey of socially interactive robots: Concepts, de-
sign, and applications. Robotics and Autonomous Systems, 42(3–4):143–166.
Johansson, M. and Skantze, G. (2015). Opportunities and
obligations to take turns in collaborative multi-party
human-robot interaction. In Proceedings of the 16th
Annual Meeting of the Special Interest Group on Dis-
course and Dialogue, pages 305–314.
Krämer, N. C., Hoffmann, L., and Kopp, S. (2010). Know your users! Empirical results for tailoring an agent’s nonverbal behavior to different user groups. In Intelligent Virtual Agents: 10th International Conference, IVA 2010, Philadelphia, PA, USA, September 20-22, 2010. Proceedings 10, pages 468–474. Springer.
Lemaignan, S., Jacq, A., Hood, D., Garcia, F., Paiva, A.,
and Dillenbourg, P. (2016). Learning by teaching a
robot: The case of handwriting. IEEE Robotics & Au-
tomation Magazine, 23(2):56–66.
Lewis, J. R. (2012). Usability testing. Handbook of Human Factors and Ergonomics, pages 1267–1312.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., et al. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.
Namlisesli, D., Baş, H. N., Bostancı, H., Coşkun, B., Barkana, D. E., and Tarakçı, D. (2024). The effect of use of social robot NAO on children’s motivation and emotional states in special education. In 2024 21st International Conference on Ubiquitous Robots (UR), pages 7–12. IEEE.
OpenAI (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Pino, O., Palestra, G., Trevino, R., and De Carolis, B.
(2020). The humanoid robot NAO as trainer in a mem-
ory program for elderly people with mild cognitive
impairment. International Journal of Social Robotics,
12:21–33.
Pulido, J. C., González, J. C., Suárez-Mejías, C., Bandera, A., Bustos, P., and Fernández, F. (2017). Evaluating the child–robot interaction of the NAOTherapist platform in pediatric rehabilitation. International Journal of Social Robotics, 9:343–358.
Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., and Sutskever, I. (2022). Robust speech recognition via large-scale weak supervision. arXiv preprint arXiv:2212.04356.
Reimann, M. M., Kunneman, F. A., Oertel, C., and Hin-
driks, K. V. (2024). A survey on dialogue manage-
ment in human-robot interaction. ACM Transactions
on Human-Robot Interaction.
Rudin, C. (2019). Stop explaining black box machine learn-
ing models for high stakes decisions and use inter-
pretable models instead. Nature Machine Intelligence, 1(5):206–215.
Sainburg, T. (2019). timsainb/noisereduce: v1.0.
Sainburg, T., Thielk, M., and Gentner, T. Q. (2020).
Finding, visualizing, and quantifying latent structure
across diverse animal vocal repertoires. PLoS Computational Biology, 16(10):e1008228.
Sauro, J. and Lewis, J. R. (2016). Quantifying the user expe-
rience: Practical statistics for user research. Morgan
Kaufmann.
Si, W. M., Backes, M., Blackburn, J., De Cristofaro, E.,
Stringhini, G., Zannettou, S., and Zhang, Y. (2022).