tacks. In International Conference on Machine Learning, pages 1964–1974. PMLR.
Dalvi, N., Domingos, P., Sanghai, S., and Verma, D. (2004). Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 99–108.
Duddu, V. (2018). A survey of adversarial machine learning
in cyber warfare. Defence Science Journal, 68(4).
Fredrikson, M., Jha, S., and Ristenpart, T. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1322–1333.
Gao, N., Gao, L., Gao, Q., and Wang, H. (2014). An
intrusion detection model based on deep belief net-
works. In 2014 Second International Conference on
Advanced Cloud and Big Data, pages 247–252. IEEE.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Ex-
plaining and harnessing adversarial examples. arXiv
preprint arXiv:1412.6572.
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P. S., and Zhang,
X. (2022). Membership inference attacks on machine
learning: A survey. ACM Computing Surveys (CSUR),
54(11s):1–37.
Lashkari, A. H., Draper-Gil, G., Mamun, M. S. I., and Ghor-
bani, A. A. (2017). Characterization of Tor traffic using time-based features. In ICISSP, pages 253–262.
McCombes, S. (2022). Sampling methods — types,
techniques & examples. https://www.scribbr.com/
methodology/sampling-methods/.
Mohammadian, H., Ghorbani, A. A., and Lashkari, A. H.
(2023). A gradient-based approach for adversarial at-
tack on deep learning-based network intrusion detec-
tion systems. Applied Soft Computing, 137:110173.
Mohammadian, H., Lashkari, A. H., and Ghorbani, A. A.
(2022). Evaluating deep learning-based NIDS in adversarial settings. In ICISSP, pages 435–444.
Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582.
Papadopoulos, P., Thornewill von Essen, O., Pitropakis, N.,
Chrysoulas, C., Mylonas, A., and Buchanan, W. J.
(2021). Launching adversarial attacks against network intrusion detection systems for IoT. Journal of Cybersecurity and Privacy, 1(2):252–273.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., and Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519.
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. (2016). The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE.
Peng, Y., Su, J., Shi, X., and Zhao, B. (2019). Evaluat-
ing deep learning based network intrusion detection
system in adversarial environment. In 2019 IEEE 9th
International Conference on Electronics Information
and Emergency Communication (ICEIEC), pages 61–
66. IEEE.
Pitropakis, N., Panaousis, E., Giannetsos, T., Anastasiadis,
E., and Loukas, G. (2019). A taxonomy and survey of
attacks against machine learning. Computer Science
Review, 34:100199.
Schwarzschild, A., Goldblum, M., Gupta, A., Dickerson,
J. P., and Goldstein, T. (2021). Just how toxic is data
poisoning? A unified benchmark for backdoor and
data poisoning attacks. In International Conference
on Machine Learning, pages 9389–9398. PMLR.
Sharafaldin, I., Lashkari, A. H., and Ghorbani, A. A.
(2018). Toward generating a new intrusion detec-
tion dataset and intrusion traffic characterization. In
ICISSP, pages 108–116.
Shokri, R., Stronati, M., Song, C., and Shmatikov, V. (2017). Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Er-
han, D., Goodfellow, I., and Fergus, R. (2013). In-
triguing properties of neural networks. arXiv preprint
arXiv:1312.6199.
Tabassi, E., Burns, K. J., Hadjimichael, M., Molina-
Markham, A. D., and Sexton, J. T. (2019). A taxon-
omy and terminology of adversarial machine learning.
NIST IR, pages 1–29.
Tian, Z., Cui, L., Liang, J., and Yu, S. (2022). A compre-
hensive survey on poisoning attacks and countermea-
sures in machine learning. ACM Computing Surveys,
55(8):1–35.
Truex, S., Liu, L., Gursoy, M. E., Yu, L., and Wei, W.
(2018). Towards demystifying membership inference
attacks. arXiv preprint arXiv:1807.09173.
Tsai, C.-F., Hsu, Y.-F., Lin, C.-Y., and Lin, W.-Y. (2009).
Intrusion detection by machine learning: A review. Expert Systems with Applications, 36(10):11994–12000.
Wang, Z. (2018). Deep learning-based intrusion detection
with adversaries. IEEE Access, 6:38367–38384.
Wang, Z., Ma, J., Wang, X., Hu, J., Qin, Z., and Ren, K.
(2022). Threats to training: A survey of poisoning at-
tacks and defenses on machine learning systems. ACM
Computing Surveys, 55(7):1–36.
Warzyński, A. and Kołaczek, G. (2018). Intrusion detection systems vulnerability on adversarial examples. In 2018 Innovations in Intelligent Systems and Applications (INISTA), pages 1–4. IEEE.
Zhang, Y., Jia, R., Pei, H., Wang, W., Li, B., and Song, D. (2020). The secret revealer: Generative model-inversion attacks against deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 253–261.
Evaluating Label Flipping Attack in Deep Learning-Based NIDS