
patient education: a case study in radiology. Academic Radiology.
Latouche, G. L., Marcotte, L., and Swanson, B. (2023). Generating video game scripts with style. In Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023), pages 129–139.
Li, B., Mellou, K., Zhang, B., Pathuri, J., and Menache, I. (2023a). Large language models for supply chain optimization. arXiv preprint arXiv:2307.03875.
Li, H., Chen, Y., Luo, J., Kang, Y., Zhang, X., Hu, Q., Chan, C., and Song, Y. (2023b). Privacy in large language models: Attacks, defenses and future directions. arXiv preprint arXiv:2310.10383.
Liu, T. and Low, B. K. H. (2023). Goat: Fine-tuned llama outperforms gpt-4 on arithmetic tasks. arXiv preprint arXiv:2305.14201.
Liu, Y., Tao, S., Meng, W., Wang, J., Ma, W., Zhao, Y., Chen, Y., Yang, H., Jiang, Y., and Chen, X. (2023). Logprompt: Prompt engineering towards zero-shot and interpretable log analysis. arXiv preprint arXiv:2308.07610.
Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J. R., Bethard, S., and McClosky, D. (2014). The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60.
McKee, F. and Noever, D. (2023). Chatbots in a honeypot world. arXiv preprint arXiv:2301.03771.
Naleszkiewicz, K. (2023). Harnessing llms in enterprise risk management: A new frontier in decision-making.
Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Barnes, N., and Mian, A. (2023). A comprehensive overview of large language models. arXiv preprint arXiv:2307.06435.
Neupane, S., Fernandez, I. A., Mittal, S., and Rahimi, S. (2023). Impacts and risk of generative ai technology on cyber defense. arXiv preprint arXiv:2306.13033.
Omar, M. and Shiaeles, S. (2023). Vuldetect: A novel technique for detecting software vulnerabilities using language models. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR), pages 105–110. IEEE.
Pa Pa, Y. M., Tanizaki, S., Kou, T., Van Eeten, M., Yoshioka, K., and Matsumoto, T. (2023). An attacker’s dream? exploring the capabilities of chatgpt for developing malware. In Proceedings of the 16th Cyber Security Experimentation and Test Workshop, pages 10–18.
Pandya, K. and Holia, M. (2023). Automating customer service using langchain: Building custom open-source gpt chatbot for organizations. arXiv preprint arXiv:2310.05421.
Pearce, H., Tan, B., Ahmad, B., Karri, R., and Dolan-Gavitt, B. (2023). Examining zero-shot vulnerability repair with large language models. In 2023 IEEE Symposium on Security and Privacy (SP), pages 2339–2356. IEEE.
Peng, B., Li, C., He, P., Galley, M., and Gao, J. (2023). Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551.
Ranade, P., Piplai, A., Joshi, A., and Finin, T. (2021). Cybert: Contextualized embeddings for the cybersecurity domain. In 2021 IEEE International Conference on Big Data (Big Data), pages 3334–3342. IEEE.
Rando, J., Perez-Cruz, F., and Hitaj, B. (2023). Passgpt: Password modeling and (guided) generation with large language models. arXiv preprint arXiv:2306.01545.
Rao, A., Kim, J., Kamineni, M., Pang, M., Lie, W., and Succi, M. D. (2023). Evaluating chatgpt as an adjunct for radiologic decision-making. medRxiv, pages 2023–02.
Rasmy, L., Xiang, Y., Xie, Z., Tao, C., and Zhi, D. (2021). Med-bert: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. NPJ digital medicine, 4(1):86.
Roy, S. S., Naragam, K. V., and Nilizadeh, S. (2023). Generating phishing attacks using chatgpt. arXiv preprint arXiv:2305.05133.
Sakaoglu, S. (2023). Kartal: Web application vulnerability hunting using large language models: Novel method for detecting logical vulnerabilities in web applications with finetuned large language models.
Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E., and Akata, Z. (2023). In-context impersonation reveals large language models’ strengths and biases. arXiv preprint arXiv:2305.14930.
Sandoval, G., Pearce, H., Nys, T., Karri, R., Garg, S., and Dolan-Gavitt, B. (2023). Lost at c: A user study on the security implications of large language model code assistants. arXiv preprint arXiv:2208.09727.
Sannihith Lingutla, S. (2023). Enhancing password security: advancements in password segmentation technique for high-quality honeywords.
Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. (2019). Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053.
Shoham, O. B. and Rappoport, N. (2023). Cpllm: Clinical prediction with large language models. arXiv preprint arXiv:2309.11295.
Sladić, M., Valeros, V., Catania, C., and Garcia, S. (2023). Llm in the shell: Generative honeypots. arXiv preprint arXiv:2309.00155.
Soltan, S., Ananthakrishnan, S., FitzGerald, J., Gupta, R., Hamza, W., Khan, H., Peris, C., Rawls, S., Rosenbaum, A., Rumshisky, A., et al. (2022). Alexatm 20b: Few-shot learning using a large-scale multilingual seq2seq model. arXiv preprint arXiv:2208.01448.