
Catlett, J. (1991). Mega induction: A test flight. In Machine Learning Proceedings 1991, pages 596–599. Elsevier.
Cohen, W. W. (1995). Fast effective rule induction. In Machine Learning Proceedings 1995, pages 115–123. Elsevier.
Doutre, S., Duchatelle, T., and Lagasquie-Schiex, M.-C. (2023). Visual explanations for defence in abstract argumentation. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 2346–2348. ACM.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357.
Fan, X. and Toni, F. (2014). On computing explanations in abstract argumentation. In ECAI 2014, pages 1005–1006. IOS Press.
Fürnkranz, J. and Widmer, G. (1994). Incremental reduced error pruning. In Machine Learning Proceedings 1994, pages 70–77. Elsevier.
Goodman, B. and Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3):50–57.
Hein, D., Udluft, S., and Runkler, T. A. (2018). Interpretable policies for reinforcement learning by genetic programming. Engineering Applications of Artificial Intelligence, 76:158–169.
Janosi, A., Steinbrunn, W., Pfisterer, M., and Detrano, R. (1988). Heart Disease. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C52P4X.
Lewis, D. (2013). Counterfactuals. John Wiley & Sons.
Liao, B., Anderson, M., and Anderson, S. L. (2021). Representation, justification, and explanation in a value-driven agent: An argumentation-based approach. AI and Ethics, 1(1):5–19.
Liao, B., Pardo, P., Slavkovik, M., and van der Torre, L. (2023). The Jiminy advisor: Moral agreements among stakeholders based on norms and argumentation. Journal of Artificial Intelligence Research, 77:737–792.
Liao, B., Slavkovik, M., and van der Torre, L. (2019). Building Jiminy Cricket: An architecture for moral agreements among stakeholders. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 147–153.
Liu, G., Schulte, O., Zhu, W., and Li, Q. (2018). Toward interpretable deep reinforcement learning with linear model U-trees. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 414–429. Springer.
Liu, J. J. and Kwok, J. T.-Y. (2000). An extended genetic rule induction algorithm. In Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No. 00TH8512), volume 1, pages 458–463. IEEE.
McBurney, P. and Parsons, S. (2004). Locutions for argumentation in agent interaction protocols. In International Workshop on Agent Communication, pages 209–225. Springer.
Nofal, S., Atkinson, K., and Dunne, P. E. (2021). Computing grounded extensions of abstract argumentation frameworks. The Computer Journal, 64(1):54–63.
Puiutta, E. and Veith, E. M. (2020). Explainable reinforcement learning: A survey. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pages 77–95. Springer.
Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1:81–106.
Quinlan, J. R. (1987). Generating production rules from decision trees. In IJCAI, volume 87, pages 304–307.
Quinlan, J. R. (2014). C4.5: Programs for Machine Learning. Elsevier.
Rizzo, L. and Longo, L. (2018). A qualitative investigation of the degree of explainability of defeasible argumentation and non-monotonic fuzzy reasoning.
Sabbatini, F., Ciatto, G., Calegari, R., and Omicini, A. (2021). On the design of PSyKE: A platform for symbolic knowledge extraction. In CEUR Workshop Proceedings, volume 2963, pages 29–48. Sun SITE Central Europe, RWTH Aachen University.
Selbst, A. and Powles, J. (2018). “Meaningful information” and the right to explanation. In Conference on Fairness, Accountability and Transparency, pages 48–48. PMLR.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Tan, S., Caruana, R., Hooker, G., and Lou, Y. (2018). Distill-and-compare: Auditing black-box models using transparent model distillation. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 303–310.
Venturini, G. (1993). SIA: A supervised inductive algorithm with genetic search for learning attributes based concepts. In European Conference on Machine Learning, pages 280–296. Springer.
Verma, A., Murali, V., Singh, R., Kohli, P., and Chaudhuri, S. (2018). Programmatically interpretable reinforcement learning. In International Conference on Machine Learning, pages 5045–5054. PMLR.
Wachter, S., Mittelstadt, B., and Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31:841.
Weiss, S. M. and Indurkhya, N. (1991). Reduced complexity rule induction. In IJCAI, pages 678–684.
Wiering, M. A. and van Otterlo, M. (2012). Reinforcement Learning: State-of-the-Art, volume 12 of Adaptation, Learning, and Optimization. Springer.
Wolberg, W., Mangasarian, O., Street, N., and Street, W. (1995). Breast Cancer Wisconsin (Diagnostic). UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5DW2B.
Yu, L., Chen, D., Qiao, L., Shen, Y., and van der Torre, L. (2021). A principle-based analysis of abstract agent argumentation semantics. In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, volume 18, pages 629–639.
Zahavy, T., Ben-Zrihem, N., and Mannor, S. (2016). Graying the black box: Understanding DQNs. In International Conference on Machine Learning, pages 1899–1908. PMLR.