Authors: Mohamed Khedher¹; Afef Awadid¹; Augustin Lemesle² and Zakaria Chihani²
Affiliations: ¹IRT - SystemX, 2 Bd Thomas Gibert, 91120 Palaiseau, France; ²CEA, The French Alternative Energies and Atomic Energy Commission, France
Keyword(s): Uncertainty in AI, AI Verification, AI Robustness, Adversarial Attacks, Formal Evaluation, Industrial Application.
Abstract: This paper introduces a three-stage evaluation pipeline for ensuring the robustness of AI models, particularly neural networks, against adversarial attacks. The first stage is formal evaluation, which may not always be feasible. When it is not, the second stage evaluates the model's robustness against intelligent adversarial attacks. If the model proves vulnerable, the third stage applies techniques to improve its robustness. The paper details each stage and the proposed solutions. The overall aim is to help developers build reliable and trustworthy AI systems that can operate effectively in critical domains, where the use of AI models can pose significant risks to human safety.