
REFERENCES
Albrecht, S. V., Christianos, F., and Schäfer, L. (2024). Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press.
Angelidou, M., Politis, C., Panori, A., Bakratsas, T., and Fellnhofer, K. (2022). Emerging smart city, transport and energy trends in urban settings: Results of a pan-European foresight exercise with 120 experts. Technological Forecasting and Social Change, 183:121915.
Booch, G., Rumbaugh, J., and Jacobson, I. (2005). The Unified Modeling Language User Guide. Addison-Wesley Professional, 2nd edition.
Buşoniu, L., Babuška, R., and De Schutter, B. (2010). Multi-agent Reinforcement Learning: An Overview. In Srinivasan, D. and Jain, L. C., editors, Innovations in Multi-Agent Systems and Applications - 1, pages 183–221. Springer, Berlin, Heidelberg.
Carloni, G., Berti, A., and Colantonio, S. (2023). The
role of causality in explainable artificial intelligence.
arXiv:2309.09901 [cs].
Casini, L. and Manzo, G. (2016). Agent-based models and causality: a methodological appraisal. Linköping University Electronic Press.
Chi, V. B. and Malle, B. F. (2023). Calibrated Human-Robot Teaching: What People Do When Teaching Norms to Robots. In 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages 1308–1314. ISSN: 1944-9437.
de Brito Duarte, R., Correia, F., Arriaga, P., and Paiva, A. (2023). AI Trust: Can Explainable AI Enhance Warranted Trust? Human Behavior and Emerging Technologies, 2023:e4637678. Publisher: Hindawi.
de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., and Neerincx, M. A. (2020). Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams. International Journal of Social Robotics, 12(2):459–478.
Directorate-General for Communications Networks, Content and Technology (European Commission) (2020). The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment. Publications Office of the European Union.
Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional, Reading, Mass., 1st edition.
Ganguly, N., Fazlija, D., Badar, M., Fisichella, M., Sikdar, S., Schrader, J., Wallat, J., Rudra, K., Koubarakis, M., Patro, G. K., Amri, W. Z. E., and Nejdl, W. (2023). A Review of the Role of Causality in Developing Trustworthy AI Systems. arXiv:2302.06975 [cs].
Goldstein, M. and Goldstein, I. F. (1978). How We Know:
An Exploration of the Scientific Process. Westview
Press.
Gower, B. (1996). Scientific Method: A Historical and
Philosophical Introduction. Routledge, London.
Griffin, C., Wallace, D., Mateos-Garcia, J., Schieve, H.,
and Kohli, P. (2024). A new golden age of discovery.
Technical report, DeepMind.
Grimbly, S. J., Shock, J., and Pretorius, A. (2021). Causal
Multi-Agent Reinforcement Learning: Review and
Open Problems. arXiv:2111.06721 [cs].
Gronauer, S. and Diepold, K. (2022). Multi-agent deep reinforcement learning: a survey. Artificial Intelligence Review, 55(2):895–943.
Hashem, I. A. T., Usmani, R. S. A., Almutairi, M. S., Ibrahim, A. O., Zakari, A., Alotaibi, F., Alhashmi, S. M., and Chiroma, H. (2023). Urban Computing for Sustainable Smart Cities: Recent Advances, Taxonomy, and Open Research Challenges. Sustainability, 15(5):3916. Publisher: Multidisciplinary Digital Publishing Institute.
Jacovi, A., Marasović, A., Miller, T., and Goldberg, Y. (2021). Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 624–635, New York, NY, USA. Association for Computing Machinery.
Jamieson, K. H., Kearney, W., and Mazza, A.-M., editors
(2024). Realizing the Promise and Minimizing the
Perils of AI for Science and the Scientific Community.
University of Pennsylvania Press.
Jiao, L., Wang, Y., Liu, X., Li, L., Liu, F., Ma, W., Guo, Y., Chen, P., Yang, S., and Hou, B. (2024). Causal Inference Meets Deep Learning: A Comprehensive Survey. Research, 7:0467. Publisher: American Association for the Advancement of Science.
Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1):99–134.
Larsen, B., Li, C., Teeuwen, S., Denti, O., DePerro, J., and
Raili, E. (2024). Navigating the AI Frontier: A Primer
on the Evolution and Impact of AI Agents. Technical
report, World Economic Forum.
Lewis, J. D. and Weigert, A. (1985). Trust as a Social Reality. Social Forces, 63(4):967–985.
Lewis, P. R. and Marsh, S. (2022). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognitive Systems Research, 72:33–49.
Maes, S., Meganck, S., and Manderick, B. (2007). Inference in multi-agent causal models. International Journal of Approximate Reasoning, 46(2):274–299.
Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995).
An Integrative Model of Organizational Trust. The
Academy of Management Review, 20(3):709–734.
Publisher: Academy of Management.
Meyer-Vitali, A. and Mulder, W. (2023). Causing Intended Effects in Collaborative Decision-Making. In Murukannaiah, P. K. and Hirzle, T., editors, Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence, volume 3456 of CEUR Workshop Proceedings, pages 137–144, Munich, Germany. CEUR. ISSN: 1613-0073.