Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors
Pham Phuc, Son Vuong, Khang Nguyen, Tuan Dang
2025
Abstract
Deep learning-based object detection has become ubiquitous in the last decade due to its high accuracy in many real-world applications. With this growing trend, these models have become attractive targets for adversaries, yet most existing attack results concern classifiers, which do not match the context of practical object detection. In this work, we propose a novel method to fool object detectors, expose the vulnerability of state-of-the-art detectors, and encourage future work on building detectors that are more robust to adversarial examples. Our method generates adversarial images by perturbing object confidence scores during training, which is crucial for predicting per-class confidence in the testing phase. Herein, we provide a more intuitive technique to embed additive noise based on detected objects' masks and the training loss, with distortion control over the original image by leveraging the gradients of iterative images. To verify the proposed method, we perform adversarial attacks against different object detectors, including recent state-of-the-art models such as YOLOv8, Faster R-CNN, RetinaNet, and Swin Transformer. We also evaluate our technique on the MS COCO 2017 and PASCAL VOC 2012 datasets and analyze the trade-off between attack success rate and image distortion. Our experiments show that the achievable attack success rate is up to 100% for white-box attacks and up to 98% for black-box attacks. The source code and relevant documentation for this work are available at https://github.com/anonymous20210106/attack_detector.git.
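The abstract describes an iterative, gradient-based attack in which additive noise is applied only inside detected objects' masks, with an explicit distortion budget on the perturbed image. The paper's actual loss and detector are not reproduced here; the sketch below illustrates the general idea on a toy differentiable "confidence score" (a sigmoid of a linear model), where the mask, step size `alpha`, and L-infinity budget `eps` are all illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_iterative_attack(image, weights, mask, steps=10, alpha=0.01, eps=0.05):
    """Iteratively lower a toy confidence score sigmoid(w . x) by adding
    sign-of-gradient noise, restricted to the object mask and clipped to
    an L-infinity distortion budget eps (illustrative, not the paper's loss)."""
    x = image.copy()
    for _ in range(steps):
        s = sigmoid(weights @ x)                  # toy confidence score
        grad = s * (1.0 - s) * weights            # analytic ds/dx
        x = x - alpha * np.sign(grad) * mask      # perturb only inside the mask
        x = np.clip(x, image - eps, image + eps)  # distortion control
        x = np.clip(x, 0.0, 1.0)                  # stay a valid image
    return x

rng = np.random.default_rng(0)
image = rng.random(16)                 # flattened toy "image"
weights = rng.standard_normal(16)      # toy detector parameters
mask = np.zeros(16)
mask[:8] = 1.0                         # the "object" occupies the first half
adv = masked_iterative_attack(image, weights, mask)
before = sigmoid(weights @ image)
after = sigmoid(weights @ adv)
```

With each step, the confidence score can only decrease on the masked pixels, and the two `clip` calls bound how far the adversarial image can drift from the original, mirroring the success-rate-versus-distortion trade-off discussed in the abstract.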
Paper Citation
in Harvard Style
Phuc P., Vuong S., Nguyen K. and Dang T. (2025). Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors. In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-728-3, SciTePress, pages 27-38. DOI: 10.5220/0013101900003912
in Bibtex Style
@conference{visapp25,
author={Pham Phuc and Son Vuong and Khang Nguyen and Tuan Dang},
title={Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors},
booktitle={Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2025},
pages={27-38},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013101900003912},
isbn={978-989-758-728-3},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors
SN - 978-989-758-728-3
AU - Phuc P.
AU - Vuong S.
AU - Nguyen K.
AU - Dang T.
PY - 2025
SP - 27
EP - 38
DO - 10.5220/0013101900003912
PB - SciTePress