
Croce, F. and Hein, M. (2021). Mind the box: l1-APGD for sparse adversarial attacks on image classifiers. In ICML.
Cui, J., Tian, Z., Zhong, Z., Qi, X., Yu, B., and Zhang, H. (2023). Decoupled Kullback-Leibler divergence loss.
Dong, Y., Deng, Z., Pang, T., Su, H., and Zhu, J. (2020). Adversarial distributional training for robust deep learning. In Advances in Neural Information Processing Systems.
Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., and Li, J. (2018). Boosting adversarial attacks with momentum.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples.
Gowal, S., Rebuffi, S.-A., Wiles, O., Stimberg, F., Calian, D. A., and Mann, T. (2021). Improving robustness using generated data.
Huang, Z., Fan, Y., Liu, C., Zhang, W., Zhang, Y., Salzmann, M., Süsstrunk, S., and Wang, J. (2022). Fast adversarial training with adaptive step size.
Jia, X., Zhang, Y., Wu, B., Ma, K., Wang, J., and Cao, X. (2022). LAS-AT: Adversarial training with learnable attack strategy.
Khamaiseh, S., Al-Alaj, A., Adnan, M., and Alomari, H. W. (2022a). The robustness of detecting known and unknown DDoS saturation attacks in SDN via the integration of supervised and semi-supervised classifiers. Future Internet, 14(6).
Khamaiseh, S. Y., Al-Alaj, A., and Warner, A. (2020). FloodDetector: Detecting unknown DoS flooding attacks in SDN. In 2020 International Conference on Internet of Things and Intelligent Applications (ITIA), pages 1–5.
Khamaiseh, S. Y., Bagagem, D., Al-Alaj, A., Mancino, M., Alomari, H., and Aleroud, A. (2023). Target-X: An efficient algorithm for generating targeted adversarial images to fool neural networks. In 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), pages 617–626.
Khamaiseh, S. Y., Bagagem, D., Al-Alaj, A., Mancino, M., and Alomari, H. W. (2022b). Adversarial deep learning: A survey on adversarial attacks and defense mechanisms on image classification. IEEE Access, 10:102266–102291.
Kosuge, A., Sumikawa, R., Hsu, Y.-C., Shiba, K., Hamada, M., and Kuroda, T. (2023). A 183.4 nJ/inference, 152.8 µW single-chip fully synthesizable wired-logic DNN processor for always-on 35 voice commands recognition application. In 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), pages 1–2.
Kundu, S., Nazemi, M., Beerel, P. A., and Pedram, M. (2021). DNR: A tunable robust pruning framework through dynamic network rewiring of DNNs. In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pages 344–350.
Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
Lee, S., Lee, H., and Yoon, S. (2020). Adversarial vertex mixup: Toward better adversarially robust generalization.
Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., and Zhu, J. (2018). Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1778–1787.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2019). Towards deep learning models resistant to adversarial attacks.
Pang, T., Lin, M., Yang, X., Zhu, J., and Yan, S. (2022). Robustness and accuracy could be reconcilable by (proper) definition.
Papernot, N. and McDaniel, P. (2017). Extending defensive distillation.
Shi, L. and Liu, W. (2024). A closer look at curriculum adversarial training: From an online perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 14973–14981. Association for the Advancement of Artificial Intelligence.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Tang, K., Huang, J., and Zhang, H. (2020). Long-tailed classification by keeping the good and removing the bad momentum causal effect. Advances in Neural Information Processing Systems, 33:1513–1524.
Tian, Q., Kuang, K., Jiang, K., Wu, F., and Wang, Y. (2021). Analysis and applications of class-wise robustness in adversarial training. In KDD '21, pages 1561–1570, New York, NY, USA. Association for Computing Machinery.
Wang, Y., Ma, X., Chen, Z., Luo, Y., Yi, J., and Bailey, J. (2019). Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 322–330.
Wu, D., Xia, S.-T., and Wang, Y. (2020). Adversarial weight perturbation helps robust generalization.
Xu, Y., Sun, Y., Goldblum, M., Goldstein, T., and Huang, F. (2023). Exploring and exploiting decision boundary dynamics for adversarial robustness.
Zhang, D., Zhang, T., Lu, Y., Zhu, Z., and Dong, B. (2019a). You only propagate once: Accelerating adversarial training via maximal principle. arXiv preprint arXiv:1905.00877.
Zhang, H., Yu, Y., Jiao, J., Xing, E. P., Ghaoui, L. E., and Jordan, M. I. (2019b). Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning.
Zhang, J., Xu, X., Han, B., Niu, G., Cui, L., Sugiyama, M., and Kankanhalli, M. (2020). Attacks which do not kill training make adversarial learning stronger.
Zhang, J., Zhu, J., Niu, G., Han, B., Sugiyama, M., and Kankanhalli, M. (2021). Geometry-aware instance-reweighted adversarial training. In International Conference on Learning Representations.