
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa,
Y. (2022). Large language models are zero-shot rea-
soners. ArXiv, abs/2205.11916.
Sarda, K., Namrud, Z., Rouf, R., Ahuja, H., Rasolroveicy, M., Litoiu, M., Shwartz, L., and Watts, I. (2023). ADARMA: Auto-detection and auto-remediation of microservice anomalies by leveraging large language models. In Proceedings of the 33rd Annual International Conference on Computer Science and Software Engineering, CASCON '23, pages 200–205, USA. IBM Corp.
Kratzke, N. (2023). Cloud-native Computing: Software Engineering von Diensten und Applikationen für die Cloud. Carl Hanser Verlag GmbH & Co. KG.
Lanciano, G., Stein, M., Hilt, V., Cucinotta, T., et al.
(2023). Analyzing declarative deployment code with
large language models. CLOSER, 2023:289–296.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., Riedel, S., and Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Lin, J., Tang, J., Tang, H., Yang, S., Dang, X., and Han, S. (2023). AWQ: Activation-aware weight quantization for LLM compression and acceleration. ArXiv, abs/2306.00978.
Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Bras, R. L.,
Choi, Y., and Hajishirzi, H. (2021). Generated knowl-
edge prompting for commonsense reasoning. In An-
nual Meeting of the Association for Computational
Linguistics.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9).
Long, J. (2023). Large language model guided tree-of-
thought. ArXiv, abs/2305.08291.
Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S.,
Usman, M., Barnes, N., and Mian, A. (2023). A com-
prehensive overview of large language models. arXiv
preprint arXiv:2307.06435.
Paranjape, B., Lundberg, S. M., Singh, S., Hajishirzi, H., Zettlemoyer, L., and Ribeiro, M. T. (2023). ART: Automatic multi-step reasoning and tool-use for large language models. ArXiv, abs/2303.09014.
Petroni, F., Rocktäschel, T., Lewis, P., Bakhtin, A., Wu, Y., Miller, A. H., and Riedel, S. (2019). Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Quint, P.-C. and Kratzke, N. (2019). Towards a lightweight multi-cloud DSL for elastic and transferable cloud-native applications.
Sultan, S., Ahmad, I., and Dimitriou, T. (2019). Container security: Issues, challenges, and the road ahead. IEEE Access, 7:52976–52996.
Topsakal, O. and Akinci, T. C. (2023). Creating large language model applications utilizing LangChain: A primer on developing LLM apps fast. In International Conference on Applied Engineering and Natural Sciences.
Tosatto, A., Ruiu, P., and Attanasio, A. (2015). Container-
based orchestration in cloud: state of the art and chal-
lenges. In 2015 Ninth international conference on
complex, intelligent, and software intensive systems,
pages 70–75. IEEE.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. (2023a). LLaMA: Open and efficient foundation language models. ArXiv, abs/2302.13971.
Touvron, H., Martin, L., ..., and Scialom, T. (2023b).
Llama 2: Open foundation and fine-tuned chat mod-
els. ArXiv, abs/2307.09288.
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E. H., and Zhou, D. (2022). Self-consistency improves chain of thought reasoning in language models. ArXiv, abs/2203.11171.
Wei, J., Bosma, M., Zhao, V., Guu, K., Yu, A. W., Lester,
B., Du, N., Dai, A. M., and Le, Q. V. (2021). Fine-
tuned language models are zero-shot learners. ArXiv,
abs/2109.01652.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E. H., Xia, F., Le, Q., and Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903.
Xu, Y., Chen, Y., Zhang, X., Lin, X., Hu, P., Ma, Y., Lu, S., Du, W., Mao, Z. M., Zhai, E., et al. (2023). CloudEval-YAML: A realistic and scalable benchmark for cloud configuration generation.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao,
Y., and Narasimhan, K. (2023). Tree of thoughts: De-
liberate problem solving with large language models.
ArXiv, abs/2305.10601.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2022). ReAct: Synergizing reasoning and acting in language models. ArXiv, abs/2210.03629.
Ye, J., Chen, X., Xu, N., ..., and Huang, X. (2023). A comprehensive capability analysis of GPT-3 and GPT-3.5 series models. ArXiv, abs/2303.10420.
Zhao, X., Lu, J., Deng, C., Zheng, C., Wang, J., Chowdhury, T., Yun, L., Cui, H., Zhang, X., Zhao, T., et al. (2023). Domain specialization as the key to make large language models disruptive: A comprehensive survey. arXiv preprint arXiv:2305.18703.