
These enhancements will allow the IVA to perform more sophisticated data analyses, providing users with deeper insights and more actionable information. As we continue to refine and expand the IVA system, integrating advanced NLP, IoT, and data analytics technologies will lead to more intelligent and intuitive virtual assistants. These advancements are expected to enhance smart home and vehicle environments, contributing to a more connected and efficient future. The ongoing research and development efforts underscore the potential of this integration, setting the stage for future improvements in intelligent virtual assistant technologies.
ACKNOWLEDGEMENTS
This research was conducted with the support of the Core Program under the National Research Development and Innovation Plan 2022-2027. The project, titled “Contributions to the Consolidation of Emerging Technologies Specific to the Internet of Things and Complex Systems,” is funded by the Ministry of Research, Innovation and Digitization (MCID), project number 23 38 01 01.