corpus: A resource for disease name recognition and
concept normalization. Journal of Biomedical Informatics, 47:1–10.
Elsahar, H. (2017). T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples [NIF SAMPLE].
Emelin, D., Bonadiman, D., Alqahtani, S., Zhang, Y., and
Mansour, S. (2022). Injecting domain knowledge
in language models for task-oriented dialogue sys-
tems. In Proceedings of the 2022 Conference on
Empirical Methods in Natural Language Processing,
pages 11962–11974. Association for Computational
Linguistics.
Fichtl, A. (2024). Evaluating adapter-based knowledge-
enhanced language models in the biomedical domain.
Master’s thesis, Technical University of Munich, Mu-
nich, Germany.
Gu, Y., Tinn, R., Cheng, H., Lucas, M. R., Usuyama,
N., Liu, X., Naumann, T., Gao, J., and Poon,
H. (2020). Domain-specific language model pre-
training for biomedical natural language process-
ing. ACM Transactions on Computing for Healthcare
(HEALTH), 3:1–23.
Guo, Q. and Guo, Y. (2022). Lexicon enhanced Chinese
named entity recognition with pointer network. Neu-
ral Computing and Applications.
Han, W., Pang, B., and Wu, Y. N. (2021). Robust trans-
fer learning with pretrained language models through
adapters. ArXiv, abs/2108.02340.
He, J., Zhou, C., Ma, X., Berg-Kirkpatrick, T., and Neu-
big, G. (2021a). Towards a unified view of parameter-
efficient transfer learning. ArXiv, abs/2110.04366.
He, R., Liu, L., Ye, H., Tan, Q., Ding, B., Cheng, L., Low,
J.-W., Bing, L., and Si, L. (2021b). On the effective-
ness of adapter-based tuning for pretrained language
model adaptation.
He, Y., Zhu, Z., Zhang, Y., Chen, Q., and Caverlee, J.
(2020). Infusing Disease Knowledge into BERT for
Health Question Answering, Medical Inference and
Disease Name Recognition. In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 4604–4614,
Online. Association for Computational Linguistics.
Hogan, A., Blomqvist, E., Cochez, M., d’Amato, C., de Melo, G., Gutiérrez, C., Kirrane, S., Gayo, J. E. L., Navigli, R., Neumaier, S., Ngomo, A.-C. N., Polleres, A., Rashid, S. M., Rula, A., Schmelzeisen, L., Sequeda, J., Staab, S., and Zimmermann, A. (2020). Knowledge graphs. ACM Computing Surveys (CSUR), 54:1–37.
Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B.,
de Laroussilhe, Q., Gesmundo, A., Attariyan, M., and
Gelly, S. (2019). Parameter-efficient transfer learn-
ing for NLP. In International Conference on Machine
Learning.
Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang,
S., Wang, L., and Chen, W. (2022). LoRA: Low-rank
adaptation of large language models. In International
Conference on Learning Representations.
Hu, L., Liu, Z., Zhao, Z., Hou, L., Nie, L., and Li, J. (2023).
A survey of knowledge enhanced pre-trained language
models.
Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang,
H., Chen, Q., Peng, W., Feng, X., Qin, B., and Liu, T.
(2023). A Survey on Hallucination in Large Language
Models: Principles, Taxonomy, Challenges, and Open
Questions. ArXiv, abs/2311.05232.
Hung, C.-C., Lange, L., and Strötgen, J. (2023). TADA:
Efficient task-agnostic domain adaptation for trans-
formers. In Findings of the Association for Com-
putational Linguistics: ACL 2023, pages 487–503,
Toronto, Canada. Association for Computational Lin-
guistics.
Hung, C.-C., Lauscher, A., Ponzetto, S., and Glavaš, G.
(2022). DS-TOD: Efficient domain specialization for
task-oriented dialog. In Findings of the Association
for Computational Linguistics: ACL 2022, pages 891–
904. Association for Computational Linguistics.
Ji, S., Pan, S., Cambria, E., Marttinen, P., and Yu, P. S.
(2020). A survey on knowledge graphs: Representa-
tion, acquisition, and applications. IEEE Transactions
on Neural Networks and Learning Systems, 33:494–
514.
Jin, Q., Dhingra, B., Liu, Z., Cohen, W., and Lu, X. (2019).
PubMedQA: A dataset for biomedical research ques-
tion answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, Hong
Kong, China. Association for Computational Linguis-
tics.
Kær Jørgensen, R., Hartmann, M., Dai, X., and Elliott,
D. (2021). mDAPT: Multilingual domain adaptive
pretraining in a single model. In Findings of the
Association for Computational Linguistics: EMNLP
2021, pages 3404–3418. Association for Computa-
tional Linguistics.
Khadir, A. C., Aliane, H., and Guessoum, A. (2021). On-
tology learning: Grand tour and challenges. Computer
Science Review, 39:100339.
Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner,
M., Bailey, J., and Linkman, S. (2009). Systematic lit-
erature reviews in software engineering – a systematic
literature review. Information and Software Technol-
ogy, 51(1):7–15. Special Section - Most Cited Articles
in 2002 and Regular Research Papers.
Lai, T. M., Zhai, C., and Ji, H. (2023). KEBLM: Knowledge-
enhanced biomedical language models. Journal of
Biomedical Informatics, 143:104392.
Lauscher, A., Majewska, O., Ribeiro, L. F. R., Gurevych,
I., Rozanov, N., and Glavaš, G. (2020). Common
sense or world knowledge? investigating adapter-
based knowledge injection into pretrained transform-
ers. In Proceedings of Deep Learning Inside Out
(DeeLIO), pages 43–49. Association for Computa-
tional Linguistics.
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H.,
and Kang, J. (2019). BioBERT: a pre-trained biomed-
ical language representation model for biomedical text
mining. Bioinformatics, 36(4):1234–1240.
Li, B., Hwang, D., Huo, Z., Bai, J., Prakash, G., Sainath,
T. N., Chai Sim, K., Zhang, Y., Han, W., Strohman, T.,
and Beaufays, F. (2023). Efficient domain adaptation
for speech foundation models. In ICASSP 2023 - 2023
IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP), pages 1–5.