be used to detect undesired behaviours during the execution (the bugs associated with each proof failure) and to determine the errors in the specification linked to these bugs (here, the lack of a lock on an environment variable by the producer agent).
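As an illustration only, the following minimal Python sketch suggests the kind of patch such a proof failure points to: a producer agent that acquires a lock before updating a shared environment variable, so that interleaved agent actions cannot leave the environment in an inconsistent state. The names (Environment, ProducerAgent, produce) are hypothetical and are not taken from the paper's GDT4MAS specification.

```python
import threading

class Environment:
    """Hypothetical shared environment: one variable guarded by a lock."""
    def __init__(self):
        self.stock = 0
        self.lock = threading.Lock()

class ProducerAgent:
    def __init__(self, env):
        self.env = env

    def produce(self, quantity):
        # Without this lock, the read-modify-write steps of several agents
        # can interleave and lose updates -- the kind of bug that shows up
        # in only a few executions and that the proof failure reveals.
        with self.env.lock:
            self.env.stock += quantity
```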
5 CONCLUSION AND PERSPECTIVES
In this article, we have presented a promising way to use proof failures in the tuning of MAS. In particular, we have shown that such a technique highlights bugs that appear in only a few executions because they can depend on the interleaving of the actions of the agents. This makes these bugs hard to detect and to correct with standard debugging techniques, because they are hard to reproduce. Of course, research must continue with other kinds of proof failures to validate the technique in a more general way. We also aim to develop a semi-automatic use of proof failures, because standard patterns of proof failure seem to emerge. In the longer term, we should be able to propose a taxonomy of proof failures, associating with each kind of proof failure its potential causes and the patches it may require.