Authors:
Ruben Naranjo 1; Nerea Aranjuelo 1; Marcos Nieto 1; Itziar Urbieta 1; Javier Fernández 2 and Itsaso Rodríguez-Moreno 3
Affiliations:
1 Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, 20009, Donostia-San Sebastian, Spain; 2 Ikerlan Technological Research Center, Basque Research and Technology Alliance (BRTA), José María Arizmendiarreta 2, 20500, Arrasate/Mondragón, Spain; 3 University of the Basque Country (UPV/EHU), Donostia-San Sebastian, Spain
Keyword(s):
AI Trustworthiness, Ontology, Data Model, CCAM, Trustworthiness Assessment.
Abstract:
Amidst the growing landscape of trustworthiness-related initiatives and works, both in the academic community and from official EU groups, there is a lack of coordination in the concepts these works use and in how those concepts relate to one another. This lack of coordination generates confusion and hinders advances in trustworthy AI systems. The confusion is particularly grave in the CCAM domain, given that nearly all vehicle-related functionalities are safety-critical applications and must be perceived as trustworthy before they can become available to the general public. In this paper, we propose a defined set of terms and their definitions, carefully selected from existing reports, regulations, and academic papers, and construct an ontology-based data model that can assist any user in comprehending those terms and their relationships to one another. In addition, we implement this data model as a tool that guides users through the self-assessment of the trustworthiness of an AI system. We use a graph database that allows querying and automating the assessment of any particular AI system. We demonstrate the latter with a practical use case that performs an automated trustworthiness assessment based on user-inputted data.