
Figure 10: An outline of the final workflow.
ACKNOWLEDGEMENTS
This work was partially supported by the project
RESTART (PE00000001), and the project SER-
ICS (PE00000014) under the NRRP MUR program
funded by the EU - NextGenerationEU.
REFERENCES
Brodie, C., Karat, C.-M., and Karat, J. (2006). An empirical study of natural language parsing of privacy policy rules using the SPARCLE policy workbench. In Symposium On Usable Privacy and Security.
Buscemi, A. (2023). A comparative study of code generation using ChatGPT 3.5 across 10 programming languages. ArXiv, abs/2308.04477.
Coyne, E. and Weil, T. R. (2013). ABAC and RBAC: scal-
able, flexible, and auditable access management. IT
Prof., 15(3):14–16.
Goknil, A., Gelderblom, F. B., Tverdal, S., Tokas, S., and Song, H. (2024). Privacy policy analysis through prompt engineering for LLMs. ArXiv, abs/2409.14879.
Hassanin, M. and Moustafa, N. (2024). A comprehensive overview of large language models (LLMs) for cyber defences: Opportunities and directions. ArXiv, abs/2405.14487.
Jin, Y., Sorley, T., O’Brien, S., and Reyes, J. (2014). Imple-
mentation of XACML role-based access control spec-
ification. Int. J. Comput. Their Appl., 21(1):62–69.
Kumar, V., Srivastava, P., Dwivedi, A., Budhiraja, I., Ghosh, D., Goyal, V., and Arora, R. (2023). Large-language-models (LLM)-based AI chatbots: Architecture, in-depth analysis and their performance evaluation. In International Conference on Recent Trends in Image Processing and Pattern Recognition.
Liu, Y., Li, D., Wang, K., Xiong, Z., Shi, F., Wang, J., Li, B., and Hang, B. (2024). Are LLMs good at structured outputs? A benchmark for evaluating structured output capabilities in LLMs. Inf. Process. Manag., 61:103809.
Michael, K., Abbas, R., and Roussos, G. (2023). AI in cybersecurity: The paradox. IEEE Transactions on Technology and Society.
Narouei, M., Takabi, H., and Nielsen, R. D. (2020). Au-
tomatic extraction of access control policies from nat-
ural language documents. IEEE Transactions on De-
pendable and Secure Computing, 17:506–517.
Rubio-Medrano, C. E., Kotak, A., Wang, W., and Sohr, K. (2024). Pairing human and artificial intelligence: Enforcing access control policies with LLMs and formal specifications. Proceedings of the 29th ACM Symposium on Access Control Models and Technologies.
Siam, M. K., Gu, H., and Cheng, J. Q. (2024). Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for programmers.
Slankas, J., Xiao, X., Williams, L. A., and Xie, T. (2014).
Relation extraction for inferring access control rules
from natural language artifacts. Proceedings of the
30th Annual Computer Security Applications Confer-
ence.
Subramaniam, P. and Krishnan, S. (2024). Intent-based access control: Using LLMs to intelligently manage access control. ArXiv, abs/2402.07332.
Vijayan, A. (2023). A prompt engineering approach for structured data extraction from unstructured text using conversational LLMs. Proceedings of the 2023 6th International Conference on Algorithms, Computing and Artificial Intelligence.
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., and Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. ArXiv, abs/2302.11382.
Zhong, L. and Wang, Z. (2023). Can LLM replace Stack Overflow? A study on robustness and reliability of large language model code generation. In AAAI Conference on Artificial Intelligence.
SAMPLE AUTHORIZATION
REQUIREMENTS AND REQUESTS
Policy 1 (P1): Authorization Requirements: The
resource http://localhost:8280/services/echo/ can be
read only by users belonging to the administrators
group. Any other operation or all requests to access
MODELSWARD 2025 - 13th International Conference on Model-Based Software and Systems Engineering
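The read permission stated in P1 could be encoded, for instance, as the following minimal XACML 3.0 sketch. This is an illustrative fragment, not the policy generated by the workflow: the subject attribute identifier `group` is an assumed name (deployments typically use a vendor-specific or LDAP-derived attribute id), and the `deny-unless-permit` combining algorithm is assumed in order to reject any request not explicitly permitted.

```xml
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="P1-echo-read" Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit">
  <Target>
    <AnyOf>
      <AllOf>
        <!-- applies only to the echo service resource named in P1 -->
        <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">http://localhost:8280/services/echo/</AttributeValue>
          <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                               AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                               DataType="http://www.w3.org/2001/XMLSchema#string"
                               MustBePresent="false"/>
        </Match>
      </AllOf>
    </AnyOf>
  </Target>
  <Rule RuleId="permit-administrators-read" Effect="Permit">
    <Target>
      <AnyOf>
        <AllOf>
          <!-- subject must belong to the administrators group
               ("group" is an assumed attribute id) -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">administrators</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                                 AttributeId="group"
                                 DataType="http://www.w3.org/2001/XMLSchema#string"
                                 MustBePresent="false"/>
          </Match>
        </AllOf>
      </AnyOf>
      <AnyOf>
        <AllOf>
          <!-- only the read action is permitted -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                                 AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                                 DataType="http://www.w3.org/2001/XMLSchema#string"
                                 MustBePresent="false"/>
          </Match>
        </AllOf>
      </AnyOf>
    </Target>
  </Rule>
</Policy>
```

Under `deny-unless-permit`, a request that matches the resource but fails either the group or the action test falls through the single rule and is denied, which is consistent with P1 restricting all other access.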