
Biggio, B. and Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 2154–2156.
Chacon, H., Silva, S., and Rad, P. (2019). Deep learning poison data attack detection. In 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), pages 971–978. IEEE.
Chen, J., Lu, H., Huo, W., Zhang, S., Chen, Y., and Yao, Y. (2022). A defense method against backdoor attacks in neural networks using an image repair technique. In 2022 12th International Conference on Information Technology in Medicine and Education (ITME), pages 375–380.
Chen, X., Liu, C., Li, B., Lu, K., and Song, D. (2017). Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526.
Chen, Y., Gong, X., Wang, Q., Di, X., and Huang, H. (2020). Backdoor attacks and defenses for deep neural networks in outsourced cloud environments. IEEE Network, 34(5):141–147.
Chudasama, D., Patel, T., Joshi, S., and Prajapati, G. I. (2015). Image segmentation using morphological operations. International Journal of Computer Applications, 117(18).
Gu, T., Liu, K., Dolan-Gavitt, B., and Garg, S. (2019). BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244.
Guan, J., Liang, J., and He, R. (2024). Backdoor defense via test-time detecting and repairing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24564–24573.
Hong, S., Carlini, N., and Kurakin, A. (2022). Handcrafted backdoors in deep neural networks. Advances in Neural Information Processing Systems, 35:8068–8080.
Hu, B. and Chang, C.-H. (2024). Diffense: Defense against backdoor attacks on deep neural networks with latent diffusion. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, pages 1–1.
Lata, K., Singh, P., and Saini, S. (2024). Exploring model poisoning attack to convolutional neural network based brain tumor detection systems. In 2024 25th International Symposium on Quality Electronic Design (ISQED), pages 1–7. IEEE.
Matsuo, Y. and Takemoto, K. (2021). Backdoor attacks to deep neural network-based system for COVID-19 detection from chest X-ray images. Applied Sciences, 11(20):9556.
Namiot, D. (2023). Introduction to data poison attacks on machine learning models. International Journal of Open Information Technologies, 11(3):58–68.
Tian, Z., Cui, L., Liang, J., and Yu, S. (2022). A comprehensive survey on poisoning attacks and countermeasures in machine learning. ACM Computing Surveys, 55(8):1–35.
Truong, L., Jones, C., Hutchinson, B., August, A., Praggastis, B., Jasper, R., Nichols, N., and Tuor, A. (2020). Systematic evaluation of backdoor data poisoning attacks on image classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 788–789.
Van, M.-H., Carey, A. N., and Wu, X. (2023). HINT: Healthy influential-noise based training to defend against data poisoning attacks.
Xie, C., Huang, K., Chen, P.-Y., and Li, B. (2019). DBA: Distributed backdoor attacks against federated learning. In International Conference on Learning Representations.
Yan, H., Zhang, W., Chen, Q., Li, X., Sun, W., Li, H., and Lin, X. (2024). RECESS vaccine for federated learning: Proactive defense against model poisoning attacks. Advances in Neural Information Processing Systems, 36.
Yerlikaya, F. A. and Bahtiyar, Ş. (2022). Data poisoning attacks against machine learning algorithms. Expert Systems with Applications, 208:118101.
Yuan, Y., Kong, R., Xie, S., Li, Y., and Liu, Y. (2023). PatchBackdoor: Backdoor attack against deep neural networks without model modification. In Proceedings of the 31st ACM International Conference on Multimedia, pages 9134–9142.
Zhang, Y., Feng, F., Liao, Z., Li, Z., and Yao, S. (2023). Universal backdoor attack on deep neural networks for malware detection. Applied Soft Computing, 143:110389.
Zhao, B. and Lao, Y. (2022). Towards class-oriented poisoning attacks against neural networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3741–3750.