Authors:
Quang-Anh Nguyen 1; Thu-Trang Pham 1; Thi-Hai-Yen Vuong 1; Van-Giang Trinh 2 and Nguyen Ha Thanh 3,4
Affiliations:
1 VNU University of Engineering and Technology, Hanoi, Vietnam; 2 Inria Saclay, EP Lifeware, Palaiseau, France; 3 Center for Juris-Informatics, ROIS-DS, Tokyo, Japan; 4 Research and Development Center for Large Language Models, NII, Tokyo, Japan
Keyword(s):
LLM, ASP, Explainability, Misleading Information Detection.
Abstract:
Answer Set Programming (ASP) is traditionally constrained by predefined rule sets and domains, which limits the scalability of ASP systems. While Large Language Models (LLMs) exhibit remarkable capabilities in linguistic comprehension and information representation, they are limited in logical reasoning, which is the notable strength of ASP. Hence, there is growing research interest in integrating LLMs with ASP to leverage the complementary abilities of both. Although many models combining LLMs and ASP have demonstrated competitive results, the problem of misleading input information, which directly causes these models to produce incorrect solutions, has not been adequately addressed. In this study, we propose a method integrating LLMs with explainable ASP to trace back and identify misleading segments in the provided input. Experiments conducted on the CLUTRR dataset show promising results, laying a foundation for future research on error correction to enhance the accuracy of question-answering models. Furthermore, we discuss current challenges, potential advancements, and issues related to the utilization of hybrid AI systems.
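To make the "trace back" idea concrete, the following is a minimal, purely illustrative sketch (not the authors' system, which relies on explainable ASP solving rather than Python): input facts, as an LLM might extract them from a CLUTRR-style story, are forward-chained through one kinship rule while recording which input segments support each derived atom; when an integrity constraint is violated, the supporting segments are flagged as candidate misleading information. All fact names, sentences, and the rule are hypothetical examples.

```python
# Hypothetical input facts as an LLM might extract them from a short
# kinship story; each fact remembers the text segment it came from.
# The "sibling" fact is deliberately misleading.
facts = {
    ("parent", "ann", "bob"): "sentence 1",
    ("parent", "bob", "cal"): "sentence 2",
    ("sibling", "ann", "cal"): "sentence 3 (misleading)",
}

def derive(facts):
    """Forward-chain one ASP-style rule:
        grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    Each derived atom keeps the provenance of the facts that support it."""
    derived = {}
    for (r1, x, y), src1 in facts.items():
        for (r2, y2, z), src2 in facts.items():
            if r1 == "parent" and r2 == "parent" and y == y2:
                derived[("grandparent", x, z)] = [src1, src2]
    return derived

derived = derive(facts)

# Integrity constraint (ASP-style):
#   :- grandparent(X, Z), sibling(X, Z).
# A violation exposes every supporting segment as a suspect.
suspects = []
for (rel, x, z), support in derived.items():
    if rel == "grandparent" and ("sibling", x, z) in facts:
        suspects = support + [facts[("sibling", x, z)]]

print(suspects)
```

In a real pipeline the rules and constraints would live in an ASP program (e.g. solved with clingo), and the explanation machinery of explainable ASP would supply the provenance that this sketch tracks by hand.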