6 CONCLUSION AND FUTURE WORK
In this paper, we investigated the problem of adversarial attacks on deep learning models in the network domain. We chose two well-known datasets for our experiments: CIC-DDoS2019 (Sharafaldin et al., 2019) and CIC-IDS2017 (Sharafaldin et al., 2018). Since CIC-DDoS2019 contains more than 49 million records, over 16 times as many as CIC-IDS2017, using both datasets allowed us to verify the scalability of our method. We used CICFlowMeter (Lashkari et al., 2017) to extract more than 80 features from these datasets, of which 76 were used to train our deep learning model. We grouped the selected features into six categories based on their nature: Forward, Backward, Flow-based, Time-based, Packet Header-based, and Packet Payload-based features. We used each of these categories, as well as combinations of them, to generate adversarial examples for both datasets, with two perturbation magnitudes: 0.001 and 0.01.
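To make the group-restricted generation concrete, the following is a minimal sketch assuming an FGSM-style perturbation (Goodfellow et al., 2014b) confined to one feature category via a binary mask; the model architecture, batch, and column indices are hypothetical placeholders rather than our exact experimental setup:

```python
# Minimal sketch (not the authors' exact code): perturb only the columns
# belonging to one feature category, using the two magnitudes from the
# paper (0.001 and 0.01). Model, data, and indices are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def group_fgsm(model, x, y, group_idx, eps):
    """Perturb only the feature columns in `group_idx` by eps * sign(grad)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    mask = torch.zeros_like(x)
    mask[:, group_idx] = 1.0                 # restrict the attack to one group
    x_adv = x_adv + eps * mask * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()    # keep min-max-scaled features valid

# Hypothetical usage: a toy detector over the 76 selected features, with
# made-up column indices standing in for the Time-based category.
model = nn.Sequential(nn.Linear(76, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.rand(8, 76)                        # 8 flows, features scaled to [0, 1]
y = torch.randint(0, 2, (8,))                # benign / attack labels
time_based_idx = [20, 21, 22, 23]            # placeholder indices for one group
for eps in (0.001, 0.01):                    # the two magnitudes in the paper
    x_adv = group_fgsm(model, x, y, time_based_idx, eps)
```

The mask is what distinguishes this from a plain FGSM attack: only the chosen category (or a union of categories) is allowed to change, which is how per-group attack effectiveness can be compared.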
The reported results show that it is difficult to identify a single best group of features across all types of network attacks. Moreover, comparing the results on the two datasets, we found that adversarial example generation is harder for CIC-DDoS2019 than for CIC-IDS2017.
While adversarial attacks on deep learning models in the network domain have been gaining considerable attention, a major difficulty remains compared with such attacks in the image domain. The main requirement of an adversarial attack is to ensure that the attacker has not completely changed the nature of the original sample. In the image domain this is easily verified by a human observer, but in the network domain a human expert cannot play this role, and it is hard to guarantee that the changes made to the features of a flow do not alter the nature of that flow. Future work should address this problem in the network domain.
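One direction such work could take is an automated plausibility check standing in for the human observer. The sketch below illustrates the idea under loose assumptions: the feature names are hypothetical CICFlowMeter-style columns, and a real validator would need to encode the full set of inter-feature dependencies.

```python
# Minimal sketch of an automated validity check for a perturbed flow.
# Feature names are hypothetical stand-ins for CICFlowMeter-style columns;
# a complete validator would cover many more inter-feature constraints.
def is_plausible_flow(f: dict) -> bool:
    # Counts and durations can never go negative after perturbation.
    if any(f[k] < 0 for k in ("flow_duration", "tot_fwd_pkts", "tot_bwd_pkts")):
        return False
    # Min/max statistics must keep their ordering.
    if f["fwd_pkt_len_min"] > f["fwd_pkt_len_max"]:
        return False
    # A mean must stay within its own min/max bounds.
    if not (f["fwd_pkt_len_min"] <= f["fwd_pkt_len_mean"] <= f["fwd_pkt_len_max"]):
        return False
    return True

flow = {"flow_duration": 1.2, "tot_fwd_pkts": 10.0, "tot_bwd_pkts": 8.0,
        "fwd_pkt_len_min": 40.0, "fwd_pkt_len_mean": 512.0,
        "fwd_pkt_len_max": 1460.0}
assert is_plausible_flow(flow)               # an unperturbed flow passes
```

Such checks can only rule out syntactically impossible flows; verifying that a perturbed flow still realizes the original (attack) behaviour is the harder, open part of the problem.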
REFERENCES
Ashfaq, R. A. R., Wang, X.-Z., Huang, J. Z., Abbas, H., and He, Y.-L. (2017). Fuzziness based semi-supervised learning approach for intrusion detection system. Information Sciences, 378:484–497.
Biggio, B. and Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317–331.
Buczak, A. L. and Guven, E. (2015). A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Communications Surveys & Tutorials, 18(2):1153–1176.
Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.-J. (2017). ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15–26.
Dalvi, N., Domingos, P., Sanghai, S., and Verma, D. (2004). Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 99–108.
Duddu, V. (2018). A survey of adversarial machine learning in cyber warfare. Defence Science Journal, 68(4).
Gao, N., Gao, L., Gao, Q., and Wang, H. (2014). An intrusion detection model based on deep belief networks. In 2014 Second International Conference on Advanced Cloud and Big Data, pages 247–252. IEEE.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014a). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014b). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. (2017). Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pages 62–79. Springer.
Hashemi, M. J., Cusack, G., and Keller, E. (2019). Towards evaluation of NIDSs in adversarial setting. In Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, pages 14–21.
Ibitoye, O., Shafiq, O., and Matrawy, A. (2019). Analyzing adversarial attacks against deep learning for intrusion detection in IoT networks. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1–6. IEEE.
Kuppa, A., Grzonkowski, S., Asghar, M. R., and Le-Khac, N.-A. (2019). Black box attacks on deep anomaly detectors. In Proceedings of the 14th International Conference on Availability, Reliability and Security, pages 1–10.
Lashkari, A. H., Draper-Gil, G., Mamun, M. S. I., and Ghorbani, A. A. (2017). Characterization of Tor traffic using time based features. In ICISSP, pages 253–262.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519.
Peng, Y., Su, J., Shi, X., and Zhao, B. (2019). Evaluating deep learning based network intrusion detection system in adversarial environment. In 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC). IEEE.