
Gyevnar, B., Ferguson, N., and Schafer, B. (2023). Bridging
the transparency gap: What can explainable AI learn
from the AI Act? In 26th European Conference on
Artificial Intelligence, pages 964–971. IOS Press.
Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel,
D., Huang, K., Scardapane, S., Spinelli, I., Mahmud,
M., and Hussain, A. (2024). Interpreting black-box
models: a review on explainable artificial intelligence.
Cognitive Computation, 16(1):45–74.
Hawkins, R., Paterson, C., Picardi, C., Jia, Y., Calinescu,
R., and Habli, I. (2021). Guidance on the Assurance
of Machine Learning in Autonomous Systems (AM-
LAS).
Hennemann, M., Ebner, G. K., Karsten, B., Lienemann, G.,
and Wienroeder, M. (2024). Data Act: An Introduc-
tion. Nomos Verlagsgesellschaft mbH & Co. KG.
AI HLEG (2019). Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence, European Commission, B-1049 Brussels.
IEC (2010). IEC 61508(-1/7): Functional safety of elec-
trical / electronic / programmable electronic safety-
related systems.
IEC (2023). ISO/IEC WD 5338 Information technology
— Artificial intelligence — AI system life cycle pro-
cesses.
IEC (TBD). IEC TS 6254 - Information technology — Ar-
tificial intelligence — Objectives and approaches for
explainability of ML models and AI systems (Under
Development).
ISO (2018). ISO 26262(-1/11) Road vehicles – Functional
safety.
ISO (2023). ISO/CD PAS 8800 Road Vehicles — Safety
and artificial intelligence.
ISO/IEC (2020). ISO/IEC TR 24028 - Information tech-
nology — Artificial intelligence — Overview of trust-
worthiness in artificial intelligence.
ISO/IEC (2022). ISO/IEC TS 5723 - Trustworthiness —
Vocabulary.
ISO/IEC (2024). ISO/IEC TR 5469 Artificial intelligence
— Functional safety and AI systems.
Jia, Y., McDermid, J., Lawton, T., and Habli, I. (2022).
The role of explainability in assuring safety of ma-
chine learning in healthcare. IEEE Transactions on
Emerging Topics in Computing, 10(4):1746–1760.
Kanak, A., Ergün, S., Atalay, A. S., Persi, S., and Karcı, A. E. H. (2022). A review and strategic approach for the transition towards third-wave trustworthy and explainable AI in connected, cooperative and automated mobility (CCAM). In 2022 27th Asia Pacific Conference on Communications (APCC), pages 108–113. IEEE.
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., and Baum, K. (2021). What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296:103473.
McKnight, D. H., Carter, M., Thatcher, J. B., and Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2):1–25.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1–38.
Molnar, C. (2020). Interpretable machine learning. Lulu.com.
Naranjo, R., Sintes, J., Pérez-Benito, C., Alonso, P., Delgado, G., Aranjuelo, N., and Jevtić, A. (2024). Park marking detection and tracking based on a vehicle onboard system of fisheye cameras. In International Conference on Robotics, Computer Vision and Intelligent Systems, pages 31–46. Springer.
Neto, A. V. S., Camargo, J. B., Almeida, J. R., and Cug-
nasca, P. S. (2022). Safety assurance of artificial
intelligence-based systems: A systematic literature re-
view on the state of the art and guidelines for future
work. IEEE Access, 10:130733–130770.
OECD (2019). Recommendation of the Council on Artifi-
cial Intelligence, OECD/LEGAL/044.
Parraga, O., More, M. D., Oliveira, C. M., Gavenski, N. S., Kupssinskü, L. S., Medronha, A., Moura, L. V., Simões, G. S., and Barros, R. C. (2023). Fairness in deep learning: A survey on vision and language research. ACM Computing Surveys.
Perez-Cerrolaza, J. et al. (2023). Artificial Intelligence for
Safety-Critical Systems in Industrial and Transporta-
tion Domains: A Survey. ACM Comput. Surv.
Petsiuk, V., Jain, R., Manjunatha, V., Morariu, V. I., Mehra,
A., Ordonez, V., and Saenko, K. (2021). Black-box
explanation of object detectors via saliency maps. In
Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition, pages 11443–
11452.
Pham, M. and Xiong, K. (2021). A survey on secu-
rity attacks and defense techniques for connected
and autonomous vehicles. Computers & Security,
109:102269.
VDA (2023). Automotive SPICE® Process Assessment /
Reference Model Version 4.0.
APPENDIX
Here we present the set of rules designed to perform the self-assessment of an AI system based on the proposed data model. The rules are presented as pseudo-code in Algorithms 1 to 11.
if AISystem.deterministic then
    AISystem.reproducible = True
end
Algorithm 1: Reproducibility rule.
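As an illustration of how such rules could be executed in practice, the following Python sketch evaluates the reproducibility rule of Algorithm 1 over a minimal data object. The AISystem dataclass and the apply_reproducibility_rule function are hypothetical names introduced here for illustration only; they are not part of the proposed data model.

from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical, minimal stand-in for the proposed data model.
    deterministic: bool = False
    reproducible: bool = False

def apply_reproducibility_rule(system: AISystem) -> None:
    # Algorithm 1: a deterministic system is marked as reproducible.
    if system.deterministic:
        system.reproducible = True

# Example usage of the hypothetical rule.
system = AISystem(deterministic=True)
apply_reproducibility_rule(system)
print(system.reproducible)  # True

The remaining rules would be applied analogously, each reading and updating attributes of the same data object.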