
Jin, X., Vinzamuri, B., Venkatapathy, S., Ji, H., and Natara-
jan, P. (2023). Adversarial Robustness for Large Lan-
guage NER models using Disentanglement and Word
Attributions. In Findings of the Association for Com-
putational Linguistics: EMNLP 2023. Association for
Computational Linguistics.
Jung, V. and van der Plas, L. (2024). Understanding the
effects of language-specific class imbalance in multi-
lingual fine-tuning. In Findings of the Association for
Computational Linguistics.
Kang, H., Ni, J., and Yao, H. (2023). EVER: Mitigating
Hallucination in Large Language Models through Real-
Time Verification and Rectification. arXiv.org.
Kochanek, M., Cichecki, I., Kaszyca, O., Szydło, D.,
Madej, M., Jędrzejewski, D., Kazienko, P., and
Kocoń, J. (2024). Improving Training Dataset Bal-
ance with ChatGPT Prompt Engineering. Electronics,
13(12):2255.
Li, X., Wang, L., Dong, G., He, K., Zhao, J., Lei, H., Liu,
J., and Xu, W. (2023). Generative Zero-Shot Prompt
Learning for Cross-Domain Slot Filling with Inverse
Prompting. Annual Meeting of the Association for
Computational Linguistics.
Li, Z., Xu, X., Shen, T., Xu, C., Gu, J.-C., Lai, Y., Tao, C.,
and Ma, S. (2024). Leveraging Large Language Mod-
els for NLG Evaluation: Advances and Challenges.
arXiv.org.
Lo, L. S. (2023). The Art and Science of Prompt Engineer-
ing: A New Literacy in the Information Age. Internet
Reference Services Quarterly, 27(4):203–210.
Mansouri, A., Affendey, L., and Mamat, A. (2008). Named
entity recognition approaches. International Journal of
Computer Science and Network Security, 8.
Meskó, B. (2023). Prompt Engineering as an Important
Emerging Skill for Medical Professionals: Tutorial.
Journal of Medical Internet Research, 25:e50638.
Monajatipoor, M., Yang, J., Stremmel, J., Emami, M., Mo-
haghegh, F., Rouhsedaghat, M., and Chang, K.-W.
(2024). LLMs in Biomedicine: A study on clinical
Named Entity Recognition. arXiv.org.
Park, Y.-J., Pillai, A., Deng, J., Guo, E., Gupta, M., Paget,
M., and Naugler, C. (2024). Assessing the research
landscape and clinical utility of large language mod-
els: a scoping review. BMC Medical Informatics and
Decision Making, 24(1).
Rathod, J. D. (2024). Systematic Study of Prompt Engineer-
ing. International Journal for Research in Applied
Science and Engineering Technology, 12(6):597–613.
Reynolds, L. and McDonell, K. (2021). Prompt Program-
ming for Large Language Models: Beyond the Few-
Shot Paradigm. In Extended Abstracts of the 2021
CHI Conference on Human Factors in Computing Sys-
tems, pages 1–7. ACM.
Russe, M. F., Reisert, M., Bamberg, F., and Rau, A. (2024).
Improving the use of LLMs in radiology through
prompt engineering: from precision prompts to zero-
shot learning. RöFo - Fortschritte auf dem Gebiet der
Röntgenstrahlen und der bildgebenden Verfahren.
Sellemann, B. (2021). Herausforderungen der Digi-
talisierung in der Pflege [Challenges of digitalization
in nursing]. Public Health Forum, 29(3):245–247.
Sonntagbauer, M., Haar, M., and Kluge, S. (2023).
Künstliche Intelligenz: Wie werden ChatGPT und
andere KI-Anwendungen unseren ärztlichen Alltag
verändern? [Artificial intelligence: How will ChatGPT
and other AI applications change our everyday clinical
practice?]. Medizinische Klinik - Intensivmedizin und
Notfallmedizin, 118(5):366–371.
Treder, M. S., Lee, S., and Tsvetanov, K. A. (2024). In-
troduction to Large Language Models (LLMs) for de-
mentia care and research. Frontiers in Dementia, 3.
Wang, J., Shi, E., Yu, S., Wu, Z., Ma, C., Dai, H., Yang, Q.,
Kang, Y., Wu, J., Hu, H., Yue, C., Zhang, H., Liu, Y.,
Pan, Y., Liu, Z., Sun, L., Li, X., Ge, B., Jiang, X., Zhu,
D., Yuan, Y., Shen, D., Liu, T., and Zhang, S. (2023a).
Prompt Engineering for Healthcare: Methodologies
and Applications. arXiv.org.
Wang, S., Sun, X., Li, X., Ouyang, R., Wu, F., Zhang, T.,
Li, J., and Wang, G. (2023b). GPT-NER: Named Entity
Recognition via Large Language Models. arXiv.org.
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert,
H., Elnashar, A., Spencer-Smith, J., and Schmidt,
D. C. (2023). A Prompt Pattern Catalog to Enhance
Prompt Engineering with ChatGPT. arXiv.org.
Yu, J., Bohnet, B., and Poesio, M. (2020). Named En-
tity Recognition as Dependency Parsing. In Proceed-
ings of the 58th Annual Meeting of the Association for
Computational Linguistics. Association for Computa-
tional Linguistics.
Yuan, Y., Gao, J., and Zhang, Y. (2017). Supervised learn-
ing for robust term extraction. In 2017 International
Conference on Asian Language Processing (IALP),
volume 1031, pages 302–305. IEEE.
Zernikow, J., Grassow, L., Gröschel, J., Henrion, P., Wet-
zel, P. J., and Spethmann, S. (2023). Anwendung von
”large language models” in der Klinik [Use of large
language models in the clinic]. Die Innere Medizin,
64(11):1058–1064.
Zhang, J., Li, Z., Das, K., Malin, B. A., and Kumar,
S. (2023). SAC3: Reliable Hallucination Detection
in Black-Box Language Models via Semantic-aware
Cross-check Consistency. Conference on Empirical
Methods in Natural Language Processing.
Zhou, C., He, J., Ma, X., Berg-Kirkpatrick, T., and Neubig,
G. (2022). Prompt Consistency for Zero-Shot Task
Generalization. Conference on Empirical Methods in
Natural Language Processing.
Zhou, G. and Su, J. (2001). Named entity recognition us-
ing an HMM-based chunk tagger. In Proceedings of
the 40th Annual Meeting of the Association for Compu-
tational Linguistics - ACL ’02, page 473. Association
for Computational Linguistics.
HEALTHINF 2025 - 18th International Conference on Health Informatics