
ASP solver and its explainable extension are de-
fined with respect to specific input data and expected
output. Therefore, for different datasets with sepa-
rate targets, compatible rule sets have to be recon-
structed. In general, this work still requires human
supervision and has not yet been automated.
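To illustrate why rule sets are dataset-specific, consider the following minimal sketch (the predicates, helper function, and example dataset are hypothetical, not taken from this study): the fixed rule set encodes relations of one domain, so a dataset with different target predicates would require a newly authored rule set before the ASP solver can be applied.

```python
def build_program(llm_facts, rule_set):
    """Combine facts extracted by an LLM with a dataset-specific ASP rule set."""
    return "\n".join(llm_facts) + "\n" + rule_set

# A rule set hand-written for a kinship-style dataset; a dataset about a
# different domain (e.g. legal norms) would need entirely different rules.
KINSHIP_RULES = """
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
misleading(Q) :- claims(Q, R), not holds(R).
"""

# Facts as an LLM might extract them from a verbal context.
facts = ["parent(ann, bob).", "parent(bob, carl)."]

program = build_program(facts, KINSHIP_RULES)
# `program` would then be handed to an ASP solver such as clingo.
```

The manual step the text refers to is the authoring of `KINSHIP_RULES`: only the fact extraction is delegated to the LLM, while the rule set itself must currently be reconstructed by hand for each new dataset.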
Perspectives. Further research is needed to address
the above limitations comprehensively, ranging from
applying optimization techniques that reduce compu-
tational resource usage to building a generative rule
system. Moreover, building on the proposal of this
study, new directions for solving logical reasoning
tasks in natural language can be explored. One pos-
sible line of work is to eliminate errors in the verbal
context by leveraging advances in LLMs and ASP.
6 CONCLUSION
This study introduces an innovative methodology
for detecting misleading information by integrating
Large Language Models (LLMs) with explainable
Answer Set Programming (ASP). The synergy be-
tween the contextual understanding capabilities of
LLMs and the reasoning and explanatory potential of
explainable ASP has demonstrated effectiveness in
identifying misleading information that can cause
confusion and significantly affect the accuracy of re-
sponses. This work makes a substantial contribution
to the advancement and refinement of reliable AI
question-answering systems.
ICAART 2025 - 17th International Conference on Agents and Artificial Intelligence