Authors:
Hesamodin Mohammadian, Arash Habibi Lashkari and Ali A. Ghorbani
Affiliation:
Canadian Institute for Cybersecurity, University of New Brunswick, Fredericton, New Brunswick, Canada
Keyword(s):
Network Intrusion Detection, Deep Learning, Adversarial Attack.
Abstract:
Intrusion detection systems are a critical component of any cybersecurity infrastructure. With the increase in the speed and volume of network traffic, these systems struggle to detect attacks efficiently. In recent years, deep neural networks have demonstrated strong performance and efficiency in several machine learning tasks, including intrusion detection. Nevertheless, deep neural networks have recently been found to be vulnerable to adversarial examples in the image domain. In this paper, we evaluate adversarial example generation for malicious network activity classification. We use the CIC-IDS2017 and CIC-DDoS2019 datasets with 76 different network features and try to find the most suitable features for generating adversarial examples in this domain. We group these features into different categories based on their nature. The results of the experiments show that, because these features are interdependent, it is impossible to make a general decision that holds for all types of network attacks. The group of All features has the highest potential for adversarial attacks, with 38.22% success on CIC-IDS2017 and 39.76% on CIC-DDoS2019 at an ε value of 0.01, followed by the combination of the Forward, Backward, and Flow-based feature groups (23.28% success on CIC-IDS2017 and 36.65% on CIC-DDoS2019 at ε = 0.01) and the combination of the Forward and Backward feature groups.
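The abstract describes perturbing only selected feature groups with a fixed ε budget. The paper does not give its generation code here, but a minimal sketch of the idea, assuming an FGSM-style sign-of-gradient perturbation restricted to chosen feature indices (the `fgsm_masked` name, the gradient vector, and the mask layout are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fgsm_masked(x, grad, feature_mask, epsilon=0.01):
    """FGSM-style perturbation limited to a subset of features.

    x            -- original 76-dimensional flow feature vector
    grad         -- gradient of the loss w.r.t. x (assumed given by the model)
    feature_mask -- 1.0 for features in the chosen group (e.g. Forward),
                    0.0 for features left untouched
    epsilon      -- perturbation budget (0.01 in the experiments above)
    """
    perturbation = epsilon * np.sign(grad) * feature_mask
    return x + perturbation

# Toy example: perturb only the first 10 of 76 features.
x = np.zeros(76)
grad = np.ones(76)           # placeholder gradient
mask = np.zeros(76)
mask[:10] = 1.0
x_adv = fgsm_masked(x, grad, mask, epsilon=0.01)
```

Masking the perturbation this way is what makes per-group comparisons (Forward vs. Backward vs. Flow-based) possible under the same ε budget.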