[Figure 4: Percentage of Perturbed Inputs (%).]
[Figure 5: Average Execution Time.]
With the linear formulation, the PPI does not exceed 40%, compared to 60% of PPI when using the quadratic formulation.
The efficiency and feasibility of our adversarial attack algorithm, which leverages an Integer Linear Programming approach, are depicted in Fig. 5. In particular, the average execution time does not exceed 1 minute, even in the worst scenario (quadratic objective function with 20 input neurons). The average time to find an adversarial image increases linearly as the hidden layer size grows from 50 to 500.
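To make the approach concrete, the following is a minimal sketch of an integer-programming adversarial search on a toy one-hidden-layer ReLU network, using the standard big-M encoding of ReLU units. The random weights, the PuLP/CBC solver, and all variable names are illustrative assumptions, not the paper's implementation; the linear (L1) objective mirrors the linear formulation evaluated above.

```python
# Minimal MILP sketch: find a small L1 perturbation of an input x0 that
# flips the decision of a tiny one-hidden-layer ReLU network.
# Weights are random stand-ins for a trained model.
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_in, n_hid = 4, 5                    # toy sizes (the paper varies these as N, M)
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
w2, b2 = rng.normal(size=n_hid), 0.0  # single output logit: > 0 means class 1
x0 = rng.uniform(0, 1, size=n_in)     # the clean input to attack
BIG_M = 100.0                         # safe bound on pre-activations

prob = pulp.LpProblem("adversarial_search", pulp.LpMinimize)
x   = [pulp.LpVariable(f"x_{i}", 0, 1) for i in range(n_in)]
eps = [pulp.LpVariable(f"eps_{i}", 0, 1) for i in range(n_in)]   # |x_i - x0_i|
h   = [pulp.LpVariable(f"h_{j}", 0) for j in range(n_hid)]       # ReLU outputs
d   = [pulp.LpVariable(f"d_{j}", cat="Binary") for j in range(n_hid)]

# Linear objective: minimise the total (L1) perturbation.
prob += pulp.lpSum(eps)
for i in range(n_in):                 # eps_i >= |x_i - x0_i|
    prob += x[i] - x0[i] <= eps[i]
    prob += x0[i] - x[i] <= eps[i]

# Big-M encoding of h_j = max(0, W1_j . x + b1_j) with indicator d_j.
for j in range(n_hid):
    z = pulp.lpSum(W1[j, i] * x[i] for i in range(n_in)) + b1[j]
    prob += h[j] >= z
    prob += h[j] <= z + BIG_M * (1 - d[j])
    prob += h[j] <= BIG_M * d[j]

# Misclassification constraint: push the output logit across the boundary.
logit = pulp.lpSum(w2[j] * h[j] for j in range(n_hid)) + b2
clean_logit = w2 @ np.maximum(0, W1 @ x0 + b1) + b2
prob += (logit <= -0.01) if clean_logit > 0 else (logit >= 0.01)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
x_adv = np.array([v.value() for v in x])
print("total perturbation:", sum(v.value() for v in eps))
```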
6 CONCLUSION
In this paper, we proposed a new optimization technique for modelling the adversarial attack process. Our formulation integrates new constraints, such as a bound on the number of perturbed inputs. Moreover, an optimal algorithm was proposed and evaluated for varying N and M according to predetermined scenarios (linear and quadratic objective functions). The performance evaluation confirms that N and M have a significant impact on the average execution time, TPC, and PPI. Finally, our results show the efficiency of the linear algorithm compared to the quadratic approach.
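The perturbed-inputs constraint can be illustrated by extending the sketch above. This is a hedged sketch of one standard encoding (my assumption, not necessarily the paper's): a binary indicator p_i marks whether input i is modified, and a budget K caps how many inputs may change.

```python
# Hypothetical extension of the MILP sketch above: bound the number of
# perturbed inputs with binary indicators p_i and a budget K.
K = 2
p = [pulp.LpVariable(f"p_{i}", cat="Binary") for i in range(n_in)]
for i in range(n_in):
    prob += eps[i] <= p[i]      # eps_i > 0 forces p_i = 1 (since eps_i <= 1)
prob += pulp.lpSum(p) <= K      # at most K inputs may be perturbed
```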
In this paper, we considered only feed-forward neural networks. As future work, we plan to extend our modelling to other deep learning architectures, such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. Moreover, we plan to validate the proposed approach on real use cases such as image classification and self-driving.