Jan, Z., Ahamed, F., Mayer, W., Patel, N., Grossmann, G.,
Stumptner, M., and Kuusk, A. (2023). Artificial in-
telligence for industry 4.0: Systematic review of ap-
plications, challenges, and opportunities. Expert Syst.
Appl.
Lin, Z., Shi, Y., and Xue, Z. (2022). IDSGAN: Genera-
tive adversarial networks for attack generation against
intrusion detection. In Advances in Knowledge Dis-
covery and Data Mining. Springer International Pub-
lishing.
Long, T., Gao, Q., Xu, L., and Zhou, Z. (2022). A survey
on adversarial attacks in computer vision: Taxonomy,
visualization and future directions. Comput. Secur.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach
to interpreting model predictions. In Advances in Neural
Information Processing Systems.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2019). Towards deep learning models re-
sistant to adversarial attacks.
Malik, A.-E., Andresini, G., Appice, A., and Malerba, D.
(2022). An XAI-based adversarial training approach
for cyber-threat detection. In 2022 IEEE International
Conference on Dependable, Autonomic and Secure Computing.
Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P.
(2016). DeepFool: A simple and accurate method to
fool deep neural networks.
Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A.,
and Jha, N. K. (2015). Systematic poisoning attacks
on and defenses for machine learning in healthcare.
IEEE Journal of Biomedical and Health Informatics.
Muñoz-González, L., Biggio, B., Demontis, A., Paudice,
A., Wongrassamee, V., Lupu, E. C., and Roli, F.
(2017). Towards poisoning of deep learning algo-
rithms with back-gradient optimization.
Nawaz, R., Shahid, M. A., Qureshi, I. M., and Mehmood,
M. H. (2018). Machine learning based false data in-
jection in smart grid. In 2018 1st International Con-
ference on Power, Energy and Smart Grid (ICPESG).
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik,
Z. B., and Swami, A. (2016). The limitations of deep
learning in adversarial settings. In 2016 IEEE Euro-
pean Symposium on Security and Privacy (EuroS&P).
Papernot, N., McDaniel, P. D., Goodfellow, I. J., Jha, S.,
Celik, Z. B., and Swami, A. (2017). Practical black-
box attacks against machine learning. In Proceedings
of the 2017 ACM on Asia Conference on Computer
and Communications Security. ACM.
Park, N., Mohammadi, M., Gorde, K., Jajodia, S., Park, H.,
and Kim, Y. (2018). Data synthesis based on gener-
ative adversarial networks. Proceedings of the VLDB
Endowment.
Parnas, D. L. (2017). The real risks of artificial intelligence.
Commun. ACM.
Qiu, S., Liu, Q., Zhou, S., and Huang, W. (2022). Adversar-
ial attack and defense technologies in natural language
processing: A survey. Neurocomputing.
Qiu, S., Liu, Q., Zhou, S., and Wu, C. (2019). Review
of artificial intelligence adversarial attack and defense
technologies. Applied Sciences.
Randhawa, R. H., Aslam, N., Alauthman, M., and Rafiq, H.
(2022). EVAGAN: Evasion generative adversarial net-
work for low data regimes.
Renjith, G., Laudanna, S., Aji, S., Visaggio, C. A., and
Vinod, P. (2022). GANG-MAM: GAN based engine for
modifying Android malware.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why
should I trust you?": Explaining the predictions of any
classifier. In Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and
Data Mining.
Rosenberg, I., Shabtai, A., Elovici, Y., and Rokach, L.
(2021). Adversarial machine learning attacks and de-
fense methods in the cyber security domain. ACM
Computing Surveys (CSUR).
Sern, L. J., David, Y. G. P., and Hao, C. J. (2020). Phish-
GAN: Data augmentation and identification of homo-
glyph attacks. In 2020 International Conference on
Communications, Computing, Cybersecurity, and In-
formatics (CCCI).
Shi, Y. and Sagduyu, Y. E. (2017). Evasion and causative
attacks with adversarial deep learning. In MILCOM
2017 - 2017 IEEE Military Communications Confer-
ence (MILCOM).
Simonyan, K. and Zisserman, A. (2014). Very deep con-
volutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556.
Slack, D., Hilgard, S., Jia, E., Singh, S., and Lakkaraju, H.
(2020). Fooling LIME and SHAP: Adversarial attacks on
post hoc explanation methods. In Proceedings of the
AAAI/ACM Conference on AI, Ethics, and Society.
Bourou, S., El Saer, A., Velivassaki, T.-H., Voulkidis, A.,
and Zahariadis, T. (2021). A review of tabular data
synthesis using GANs on an IDS dataset. Information.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan,
D., Goodfellow, I., and Fergus, R. (2014). Intriguing
properties of neural networks.
Wang, D., Li, C., Wen, S., Nepal, S., and Xiang, Y. (2020).
Defending against adversarial attack towards deep
neural networks via collaborative multi-task training.
Wang, J., Yan, X., Liu, L., Li, L., and Yu, Y. (2022). CTTGAN:
Traffic data synthesizing scheme based on conditional
GAN. Sensors.
Wang, X., Li, J., Kuang, X., Tan, Y., and Li, J. (2019). The
security of machine learning in an adversarial setting:
A survey. J. Parallel Distributed Comput.
Xu, J., Sun, Y., Jiang, X., Wang, Y., Yang, Y., Wang, C., and
Lu, J. (2021). Blindfolded attackers still threatening:
Strict black-box adversarial attacks on graphs.
Xu, L., Skoularidou, M., Cuesta-Infante, A., and Veera-
machaneni, K. (2019). Modeling tabular data using
conditional GAN.
Xu, W., Evans, D., and Qi, Y. (2018). Feature squeez-
ing: Detecting adversarial examples in deep neural
networks.
Study on Adversarial Attacks Techniques, Learning Methods and Countermeasures: Application to Anomaly Detection