Authors:
Maria Teresa Paratore, Eda Marchetti and Antonello Calabrò
Affiliation:
Institute of Information Science and Technologies “A. Faedo”, National Research Council of Italy (CNR), Pisa, Italy
Keyword(s):
Access Control, Artificial Intelligence, Large Language Models, Validation, Cybersecurity.
Abstract:
The increasing adoption of generative artificial intelligence, particularly conversational Large Language Models (LLMs), has presented new opportunities for addressing challenges in software development. This paper explores the potential of LLMs in generating eXtensible Access Control Markup Language (XACML) policies. It investigates current solutions and strategies for leveraging LLMs to produce verified, secure, and compliant access control policies. Specifically, after discussing current methods for enhancing LLM performance in generating structured text, it introduces a pipeline approach that integrates conversational LLMs with syntactic and semantic validators, ensuring the correctness and reliability of the generated policies. Our proposal is showcased using real policies and compares the performance of several LLMs (ChatGPT, Claude, Gemini, and LLaMA). Our findings suggest a promising direction for future developments in automated access control policy formulation, bridging the gap between human intent and machine interpretation.
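The paper's own validators are not reproduced here, but the syntactic stage of such a pipeline can be illustrated with a minimal sketch: before any semantic analysis, an LLM-generated policy must at least parse as well-formed XML with an XACML 3.0 `Policy` (or `PolicySet`) root element in the OASIS namespace. The function and sample policy below are hypothetical examples, not the authors' implementation.

```python
import xml.etree.ElementTree as ET

# OASIS XACML 3.0 core schema namespace
XACML_NS = "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"

def syntactic_check(policy_xml: str) -> bool:
    """Lightweight first-pass check: is the text well-formed XML whose
    root is an XACML Policy or PolicySet in the expected namespace?
    (A full validator would also run XSD schema validation.)"""
    try:
        root = ET.fromstring(policy_xml)
    except ET.ParseError:
        return False
    return root.tag in (f"{{{XACML_NS}}}Policy", f"{{{XACML_NS}}}PolicySet")

# Hypothetical LLM output: a trivial permit-all policy
sample = f'''<Policy xmlns="{XACML_NS}"
    PolicyId="example-policy"
    RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides"
    Version="1.0">
  <Target/>
  <Rule RuleId="rule-1" Effect="Permit"><Target/></Rule>
</Policy>'''
```

Policies failing this gate would be sent back to the LLM for regeneration; only well-formed candidates proceed to the semantic validator.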