
Future research will include modelling the dynamics of trustworthiness evaluation, in particular the mechanisms by which external signals, such as direct experience or communication with other agents, change an agent's evaluation of a potential trustee's trustworthiness. This will form the basis for the design of the decision-making mechanism (whom to trust?) and for the experimental verification of our model.
REFERENCES
Alexy, R. (2003). On balancing and subsumption. A structural comparison. Ratio Juris, 16(4):433–449.
Barki, H., Robert, J., and Dulipovici, A. (2015). Reconceptualizing trust: A non-linear boolean model. Information & Management, 52(4):483–495.
Castelfranchi, C. and Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model. John Wiley & Sons Ltd., UK.
Chen, S.-H., Chie, B.-T., and Zhang, T. (2015). Network-based trust games: An agent-based model. Journal of Artificial Societies and Social Simulation, 18(3):5.
Delijoo, A. (2021). Computational trust models for collaborative network orchestration. PhD thesis, University of Amsterdam.
Dworkin, R. (1978). Taking Rights Seriously. New Impression with a Reply to Critics. Duckworth.
Frey, V. and Martinez, J. (2024). Interpersonal trust modelling through multi-agent reinforcement learning. Cognitive Systems Research, 83:101157.
Fung, H. L., Darvariu, V.-A., Hailes, S., and Musolesi, M. (2022). Trust-based consensus in multi-agent reinforcement learning systems. ArXiv, abs/2205.12880.
Henderson, R. and Cockburn, I. (1994). Measuring competence? Exploring firm effects in pharmaceutical research. Strategic Management Journal, 15(S1):63–84.
Jaffry, S. W. and Treur, J. (2013). Agent-Based and Population-Based Modeling of Trust Dynamics, pages 124–151. Springer Berlin Heidelberg, Berlin, Heidelberg.
Mayer, R. and Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84:123–136.
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3):709–734.
McFall, L. (1987). Integrity. Ethics, 98(1):5–20.
McKnight, D. H., Carter, M., Thatcher, J. B., and Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manage. Inf. Syst., 2(2).
Minza, M. (2019). Benevolence, competency, and integrity: Which one is more influential on trust in friendships? Jurnal Psikologi, 18(1):91–105.
Mohajeri Parizi, M., Sileno, G., van Engers, T., and Klous, S. (2020). Run, agent, run! Architecture and benchmarking of actor-based agents. In Proceedings of the 10th ACM SIGPLAN International Workshop on Programming Based on Actors, Agents, and Decentralized Control, AGERE 2020, pages 11–20, New York, NY, USA. Association for Computing Machinery.
Nobandegani, A. S., Rish, I., and Shultz, T. R. (2023). Towards machines that trust: AI agents learn to trust in the trust game. ArXiv, abs/2312.12868.
Parsons, S., Sklar, E., and McBurney, P. (2012). Using argumentation to reason with and about trust. In McBurney, P., Parsons, S., and Rahwan, I., editors, Argumentation in Multi-Agent Systems, pages 194–212, Berlin, Heidelberg. Springer Berlin Heidelberg.
Poon, J. M. (2013). Effects of benevolence, integrity, and ability on trust-in-supervisor. Employee Relations, 35(4):396–407.
Rao, A. S. and Georgeff, M. P. (1995). BDI agents: From theory to practice. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 312–319.
Ruokonen, F. (2013). Trust, trustworthiness, and responsibility. In Trust, pages 1–14. Brill.
Sapienza, A., Cantucci, F., and Falcone, R. (2022). Modeling interaction in human-machine systems: A trust and trustworthiness approach. Automation, 3(2):242–257.
Tykhonov, D., Jonker, C., Meijer, S., and Verwaart, D. (2008). Agent-based simulation of the trust and tracing game for supply chains and networks. Journal of Artificial Societies and Social Simulation, 11(3):1–32.
Wyner, A. and Zurek, T. (2024). Towards a formalisation of motivated reasoning and the roots of conflict. In Osman, N. and Steels, L., editors, Value Engineering in Artificial Intelligence, pages 28–45, Cham. Springer Nature Switzerland.
Wyner, A. Z. and Zurek, T. (2023). On legal teleological reasoning. In Sileno, G., Spanakis, J., and van Dijck, G., editors, Legal Knowledge and Information Systems - JURIX 2023: The Thirty-sixth Annual Conference, Maastricht, The Netherlands, 18-20 December 2023, volume 379 of Frontiers in Artificial Intelligence and Applications, pages 83–88. IOS Press.
Zurek, T. (2017). Goals, values, and reasoning. Expert Systems with Applications, 71:442–456.
Zurek, T., Araszkiewicz, M., and Stachura-Zurek, D. (2022). Reasoning with principles. Expert Systems with Applications, 210:118496.
Zurek, T., Wyner, A., and van Engers, T. (2025). The model of benevolence for trust in multi-agent system. To appear in: Proceedings of the 18th KES International Conference, KES-AMSTA 2024, June 2024.
ICAART 2025 - 17th International Conference on Agents and Artificial Intelligence