the worst case) increase in execution time, which demonstrates the feasibility of the algorithm in complex scenarios.
6 CONCLUSION AND PERSPECTIVES
In this paper, we examined the safety of neural networks against input perturbations, i.e., their behavior in an uncertain environment. Our challenge was to verify the output of a neural network over a given input range and to provide formal guarantees about its behavior. Our contribution is a formulation of the verification problem based on linear programming: we proposed an exact mathematical formulation and then eliminated the non-linearities by encoding them with binary variables. In the numerical evaluation, different scenarios are discussed, and the results show that our approach is feasible in terms of convergence time and scales even to large neural networks.
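For concreteness, the following is a standard sketch of such a binary encoding, assuming each neuron's pre-activation $z$ has known finite bounds $l \le z \le u$ with $l < 0 < u$ (the bounds and notation here are our illustration, not the exact formulation above). The non-linearity $y = \mathrm{ReLU}(z) = \max(0, z)$ is replaced by the linear constraints
$$ y \ge 0, \qquad y \ge z, \qquad y \le u\,\delta, \qquad y \le z - l\,(1 - \delta), \qquad \delta \in \{0, 1\}, $$
where the binary variable $\delta$ selects the phase of the unit: $\delta = 1$ forces $y = z$ (active), while $\delta = 0$ forces $y = 0$ (inactive). Because this encoding is exact, the resulting mixed-integer linear program certifies the output range without over-approximation.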
Our approach considered only neural networks with ReLU activation functions. In future work, we plan to extend our study to other activation functions, such as Tanh and Sigmoid. Moreover, we plan to validate the proposed approach on real use cases such as image classification and self-driving.