
Kostova, S. (2022). Making humanoid robots teaching
assistants by using natural language processing (NLP)
cloud-based services. Journal of Mechatronics and
Artificial Intelligence in Engineering, 3(1):30–39.
Luna, K. L., Palacios, E. R., and Marin, A. (2018). A
fuzzy speed controller for a guide robot using an
HRI approach. IEEE Latin America Transactions,
16(8):2102–2107.
Morales, Y., Satake, S., Kanda, T., and Hagita, N. (2014).
Building a model of the environment from a route per-
spective for human–robot interaction. International
Journal of Social Robotics, 7:165–181.
Okuno, Y., Kanda, T., Imai, M., Ishiguro, H., and Hagita, N.
(2009). Providing route directions: Design of robot's
utterance, gesture, and timing. In HRI '09: Proceed-
ings of the 4th ACM/IEEE International Conference
on Human Robot Interaction, pages 53–60.
OpenAI (2023). GPT-4 Technical Report.
Oßwald, S., Kretzschmar, H., Burgard, W., and Stach-
niss, C. (2014). Learning to give route directions
from human demonstrations. In 2014 IEEE In-
ternational Conference on Robotics and Automation
(ICRA), pages 3303–3308.
Pitsch, K. and Wrede, S. (2014). When a robot orients
visitors to an exhibit: Referential practices and inter-
actional dynamics in real world HRI. In The 23rd IEEE
International Symposium on Robot and Human Inter-
active Communication, pages 36–42.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G.,
Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark,
J., Krueger, G., and Sutskever, I. (2021). Learning
transferable visual models from natural language su-
pervision. In Proceedings of the 38th International
Conference on Machine Learning, pages 8748–8763.
Radford, A., Narasimhan, K., Salimans, T., and Sutskever,
I. (2018). Improving Language Understanding by
Generative Pre-Training.
Richter, K. (2008). Context-specific route directions. KI,
22:39–40.
Richter, K. and Klippel, A. (2005). A model for context-
specific route directions. In Spatial Cognition IV. Rea-
soning, Action, Interaction, pages 58–78.
Rosenthal, S., Vichivanives, P., and Carter, E. (2022). The
impact of route descriptions on human expectations
for robot navigation. ACM Transactions on Human-
Robot Interaction, 11(35):1–19.
Russo, D., Zlatanova, S., and Clementini, E. (2014). Route
directions generation using visible landmarks. In ISA
’14: Proceedings of the 6th ACM SIGSPATIAL In-
ternational Workshop on Indoor Spatial Awareness,
pages 1–8.
Salem, M., Rohlfing, K., Kopp, S., and Joublin, F. (2011).
A friendly gesture: Investigating the effect of multi-
modal robot behavior in human-robot interaction. In
2011 20th IEEE International Conference on Robot
and Human Interactive Communication (RO-MAN),
pages 247–252.
Shah, D., Eysenbach, B., Kahn, G., Rhinehart, N., and
Levine, S. (2021). ViNG: Learning open-world
navigation with visual goals. In 2021 IEEE In-
ternational Conference on Robotics and Automation
(ICRA), pages 13215–13222.
Shah, D., Osiński, B., Ichter, B., and Levine, S. (2023). LM-
Nav: Robotic navigation with large pre-trained models
of language, vision, and action. In Proceedings of the
6th Conference on Robot Learning, pages 492–504.
Triebel, R., Arras, K., Alami, R., Beyer, L., Breuers, S.,
Chatila, R., Chetouani, M., Cremers, D., Evers, V.,
Fiore, M., Hung, H., Ramírez, O. A. I., Joosse, M.,
Khambhaita, H., Kucner, T., Leibe, B., Lilienthal,
A. J., Linder, T., Magnusson, M., Okal, B., Palmieri,
L., Rafi, U., van Rooij, M., and Zhang, L. (2016).
Spencer: A socially aware service robot for passenger
guidance and help in busy airports. In Field and Ser-
vice Robotics: Results of the 10th International Con-
ference, pages 607–622.
Waldhart, J., Clodic, A., and Alami, R. (2019). Reasoning
on shared visual perspective to improve route direc-
tions. In 2019 28th IEEE International Conference on
Robot and Human Interactive Communication (RO-
MAN), pages 1–8.
Wallén, J. (2008). The History of the Industrial Robot.
Linköping: Linköping University Electronic Press.
Zhang, H. and Ye, C. (2019). Human-robot interaction for
assisted wayfinding of a robotic navigation aid for the
blind. In 2019 12th International Conference on Hu-
man System Interaction (HSI), pages 137–142.
Zhang, Z., Chai, W., and Wang, J. (2023). Mani-GPT: A
generative model for interactive robotic manipulation.
Procedia Computer Science, 226:149–156.
Zheng, J. Y. and Tsuji, S. (1992). Panoramic representation
for route recognition by a mobile robot. International
Journal of Computer Vision, 9:55–76.
RoDiL: Giving Route Directions with Landmarks by Robots