Authors:
Anis Bouaziz¹; Manh-Dung Nguyen¹; Valeria Valdés¹; Ana Cavalli¹˒² and Wissam Mallouli¹
Affiliations:
¹ Montimage EURL, 39 rue Bobillot 75013, Paris, France
² Institut Telecom SudParis, 5 rue Charles Fourrier 91011 Evry, France
Keyword(s):
Cybersecurity, Artificial Intelligence, Adversarial Attacks, Explainability, Countermeasures.
Abstract:
Adversarial attacks on AI systems are designed to exploit vulnerabilities in AI algorithms in order to manipulate the system's output, resulting in incorrect or harmful behavior. They can take many forms, including manipulating input data, exploiting weaknesses in the AI model, and poisoning the training samples used to develop the model. In this paper, we study different types of adversarial attacks, including evasion, poisoning, and inference attacks, and their impact on AI-based systems in different fields. Particular emphasis is placed on cybersecurity applications such as Intrusion Detection Systems (IDS) and anomaly detection. We also describe different learning methods that allow us to understand how adversarial attacks work using eXplainable AI (XAI). In addition, we discuss current state-of-the-art techniques for detecting and defending against adversarial attacks, including adversarial training, input sanitization, and anomaly detection. Furthermore, we present a comprehensive analysis of the effectiveness of different defense mechanisms against different types of adversarial attacks. Overall, this study provides a comprehensive overview of the challenges and opportunities in the field of adversarial machine learning, and serves as a valuable resource for researchers, practitioners, and policymakers working on AI security and robustness. An application for anomaly detection, in particular malware detection, is presented to illustrate several of the concepts discussed in the paper.