daily lives, facilitating access to relevant health
information quickly and effectively.
3.3 Limitations and Future Work
One limitation of this study is that the assistant has not been clinically validated. In future work, we therefore intend to evaluate the LLM-based assistant in a clinical setting. The current stage of development provides an opportunity for testing and subsequent improvement, which will help ensure the effectiveness of the assistant when it is introduced to the market.
ACKNOWLEDGEMENTS
This work was supported by Portuguese funds through the Institute of Electronics and Informatics Engineering of Aveiro (IEETA) (UIDB/00127/2020), funded by national funds through FCT (Foundation for Science and Technology). This work was also carried out within the scope of, and funded by, the Health Data Science Ph.D. Program of the Faculty of Medicine of the University of Porto, Portugal (heads.med.up.pt). The authors are grateful to the Program and its faculty for investing in their students and for funding the open-access publication of their research. Additionally, this article was developed under Mobilizing Agenda No. 41, HfPT (Health from Portugal), co-funded by the PRR and the European Union through the NextGenerationEU mechanism.