cuss the impact of these choices on query efficiency.
6.3.1 Population Size
The population size plays a crucial role in balancing exploration and exploitation within the search space. A larger population size enhances diversity and improves exploration, leading to better coverage of the search space in fewer iterations. However, this advantage comes with a trade-off, as evaluating each population member incurs a query cost. Figure 7 illustrates this trade-off by showing the mean number of queries and iterations until success across various population sizes on a dataset of 20 images. Based on this experiment, we advocate a relatively small population size of six, striking a balance between convergence speed and the total number of queries expended.
Figure 7: Effect of population size selection on both the speed of convergence and the number of queries.
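To make this trade-off concrete, the minimal sketch below (hypothetical names and a simplified fitness signal, not our implementation) counts one black-box query per fitness evaluation, so the per-generation query cost grows linearly with the population size, while a larger population may need fewer generations to succeed.

```python
import numpy as np

def ga_attack_sketch(query_model, x_orig, pop_size=6, max_gens=1000,
                     eps=0.3, rho=0.05, seed=0):
    """Schematic GA loop (not the actual GenGradAttack code): every fitness
    evaluation is one black-box query, so total queries ~= pop_size * generations."""
    rng = np.random.default_rng(seed)
    # Population of bounded perturbations around the clean input.
    population = rng.uniform(-eps, eps, size=(pop_size,) + x_orig.shape)
    queries = 0
    for gen in range(max_gens):
        # One query per population member; query_model is assumed to return a
        # fitness score that is >= 0 once the perturbed input is misclassified.
        fitness = np.array([query_model(np.clip(x_orig + p, 0.0, 1.0))
                            for p in population])
        queries += pop_size
        if fitness.max() >= 0:
            return population[fitness.argmax()], queries, gen + 1
        # Elitism plus fitness-proportional parent selection.
        probs = np.exp(fitness - fitness.max())
        probs /= probs.sum()
        children = [population[fitness.argmax()]]
        while len(children) < pop_size:
            i, j = rng.choice(pop_size, size=2, p=probs)
            mask = rng.random(x_orig.shape) < 0.5            # uniform crossover
            child = np.where(mask, population[i], population[j])
            mutate = rng.random(x_orig.shape) < rho          # fixed mutation rate
            child = child + mutate * rng.uniform(-eps, eps, size=x_orig.shape)
            children.append(np.clip(child, -eps, eps))
        population = np.stack(children)
    return None, queries, max_gens
```

Under this accounting, doubling the population roughly doubles the per-generation query cost, which is the trade-off Figure 7 quantifies.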
6.3.2 Mutation Rate
The mutation rate, denoted ρ, significantly influences the algorithm's performance. We experimented with different mutation-rate strategies and found that a fixed mutation rate outperformed the alternatives. A fixed rate effectively balances exploration and exploitation and contributes to the algorithm's overall success without the need for adaptive adjustments.
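For illustration, the snippet below (a hypothetical sketch, not the code used in our experiments) contrasts a fixed per-element mutation rate ρ with a simple decaying schedule of the adaptive kind such a comparison might include; with the fixed rate, every element of a candidate perturbation is resampled with the same probability at every generation.

```python
import numpy as np

def mutate_fixed(perturbation, rho=0.05, eps=0.3, rng=None):
    """Fixed-rate mutation: each element is resampled with probability rho,
    independently of the current generation (hypothetical helper)."""
    rng = rng or np.random.default_rng()
    mask = rng.random(perturbation.shape) < rho
    noise = rng.uniform(-eps, eps, size=perturbation.shape)
    return np.clip(np.where(mask, noise, perturbation), -eps, eps)

def mutate_decaying(perturbation, generation, rho0=0.1, decay=0.99,
                    eps=0.3, rng=None):
    """One plausible adaptive alternative: the mutation rate shrinks over the
    generations, trading early exploration for later exploitation."""
    rho = rho0 * (decay ** generation)
    return mutate_fixed(perturbation, rho=rho, eps=eps, rng=rng)
```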
7 CONCLUSIONS
In this study, we introduced GenGradAttack, a pioneering approach that integrates genetic algorithms with gradient-based optimization for black-box adversarial attacks. Our results demonstrate the efficacy of GenGradAttack, which achieves high Adversarial Success Rates (ASR) with reduced query counts. Notably, on the MNIST dataset, we attained a 95.06% ASR with a median query count of 556, outperforming conventional GenAttack.
The success of GenGradAttack stems from its ability to evolve perturbations that effectively mislead the target model, demonstrating the potency of genetic algorithms in generating adversarial perturbations. Moreover, the combination of gradient-based optimization with genetic algorithms leads to faster convergence, higher ASRs, and query-efficient attacks.
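As a purely illustrative sketch of how the two ingredients can be combined (this is not GenGradAttack's exact update, which is specified earlier in the paper), one option is to refine the current elite perturbation with a zeroth-order gradient estimate obtained from additional queries, in the spirit of ZOO (Chen et al., 2017):

```python
import numpy as np

def refine_elite(loss_fn, x_orig, elite, eps=0.3, step=0.01, sigma=0.001,
                 n_samples=10, rng=None):
    """Hypothetical refinement step: estimate the gradient of the attack loss
    w.r.t. the elite perturbation via random two-sided finite differences
    (each sample costs two extra queries), then take a small signed step."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(elite)
    for _ in range(n_samples):
        u = rng.standard_normal(elite.shape)
        delta = (loss_fn(x_orig + elite + sigma * u)
                 - loss_fn(x_orig + elite - sigma * u)) / (2.0 * sigma)
        grad += delta * u
    grad /= n_samples
    # Descend the loss while keeping the perturbation inside the eps-ball.
    return np.clip(elite - step * np.sign(grad), -eps, eps)
```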
While our achievements are significant, this research lays the groundwork for future exploration. Further analysis, including more extensive experimentation and the incorporation of adaptive learning-rate strategies, has the potential to enhance the attack's effectiveness. Investigating the factors that influence transferability could yield more universally effective adversarial perturbations.
In summary, our research advances the landscape of adversarial black-box attacks, providing a robust tool for evaluating the vulnerabilities of machine-learning models. We anticipate that this work will inspire continued exploration of adversarial attacks and defenses.
ACKNOWLEDGEMENTS
This work was supported, in part, by the Engineering and Physical Sciences Research Council [grant number EP/X036871/1] and Horizon Europe [grant number HORIZON-MISS-2022-CIT-01-01].
REFERENCES
Agnihotri, S. and Keuper, M. (2023). CosPGD: A unified white-box adversarial attack for pixel-wise prediction tasks.
Alzantot, M., Sharma, Y., Chakraborty, S., Zhang, H., Hsieh, C.-J., and Srivastava, M. B. (2019). GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1111–1119.
Bhandari, D., Murthy, C., and Pal, S. K. (1996). Genetic algorithm with elitist model and its convergence. International Journal of Pattern Recognition and Artificial Intelligence, 10(06):731–747.
Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks.
Chen, J., Su, M., Shen, S., Xiong, H., and Zheng, H. (2019). POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm. Computers & Security, 85:89–106.
Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.-J. (2017). ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15–26.