Jmila, H. and Khedher, M. I. (2022). Adversarial machine
learning for network intrusion detection: A compara-
tive study. Comput. Networks, 214:109073.
Katz, G., Barrett, C. W., Dill, D. L., Julian, K., and Kochen-
derfer, M. J. (2017). Reluplex: An efficient SMT
solver for verifying deep neural networks. CoRR,
abs/1702.01135.
Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C.,
Lim, R., Shah, P., Thakoor, S., Wu, H., Zeljic, A., Dill,
D. L., Kochenderfer, M. J., and Barrett, C. W. (2019).
The marabou framework for verification and analysis
of deep neural networks. In Dillig, I. and Tasiran,
S., editors, Computer Aided Verification - 31st Inter-
national Conference, CAV 2019, New York City, NY,
USA, July 15-18, 2019, Proceedings, Part I, volume
11561 of Lecture Notes in Computer Science, pages
443–452. Springer.
Khedher, M. I., Jmila, H., and El-Yacoubi, M. A. (2023).
On the formal evaluation of the robustness of neural
networks and its pivotal relevance for ai-based safety-
critical domains. International Journal of Network
Dynamics and Intelligence, page 100018.
Khedher, M. I., Mziou-Sallami, M., and Hadji, M. (2021).
Improving decision-making-process for robot naviga-
tion under uncertainty. In International Conference on
Agents and Artificial Intelligence, pages 1105–1113.
Kolter, J. Z. and Wong, E. (2017). Provable defenses against
adversarial examples via the convex outer adversarial
polytope. CoRR, abs/1711.00851.
Kurabin, A., Goodfellow, I. J., and Bengio, S. (2017).
Adversarial examples in the physical world. ICLR,
1607.02533v4.
Lee, H., Han, S., and Lee, J. (2017). Generative adversarial
trainer: Defense to adversarial perturbations with gan.
arXiv preprint arXiv:1705.03387.
Lemesle, A., Chihani, Z., Lehmann, J., and Durand,
S. (2023). PyRAT Analyzer website. https://
pyrat-analyzer.com/. Accessed: December 15th,
2023.
Li, B.-h., Hou, B.-c., Yu, W.-t., Lu, X.-b., and Yang, C.-w.
(2017). Applications of artificial intelligence in intel-
ligent manufacturing: a review. Frontiers of Informa-
tion Technology & Electronic Engineering, 18:86–96.
Lomuscio, A. and Maganti, L. (2017a). An approach to
reachability analysis for feed-forward relu neural net-
works. CoRR, abs/1706.07351.
Lomuscio, A. and Maganti, L. (2017b). An approach to
reachability analysis for feed-forward relu neural net-
works. CoRR, abs/1706.07351.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2017). Towards deep learning mod-
els resistant to adversarial attacks. arXiv preprint
arXiv:1706.06083.
Mattioli, J., Le Roux, X., Braunschweig, B., Cantat, L.,
Tschirhart, F., Robert, B., Gelin, R., and Nicolas, Y.
(2023). Ai engineering to deploy reliable ai in indus-
try. In AI4I.
Miglani, A. and Kumar, N. (2019). Deep learning models
for traffic flow prediction in autonomous vehicles: A
review, solutions, and challenges. Vehicular Commu-
nications, 20:100184.
Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P.
(2015). Deepfool : a simple and accurate method to
fool deep neural networks. CoRR, 1511.04599.
Mwadulo, M. W. (2016). Suitability of agile methods for
safety-critical systems development: a survey of liter-
ature. International Journal of Computer Applications
Technology and Research, 5(7):465–471.
Newcombe, C., Rath, T., Zhang, F., Munteanu, B., Brooker,
M., and Deardeuff, M. (2015). How amazon web ser-
vices uses formal methods. Communications of the
ACM.
Raghunathan, A., Steinhardt, J., and Liang, P. (2018).
Certified defenses against adversarial examples. In
6th International Conference on Learning Represen-
tations, ICLR 2018, Vancouver, BC, Canada, April 30
- May 3, 2018, Conference Track Proceedings. Open-
Review.net.
Tjeng, V., Xiao, K. Y., and Tedrake, R. (2019). Evaluating
robustness of neural networks with mixed integer pro-
gramming. In 7th International Conference on Learn-
ing Representations, ICLR 2019, New Orleans, LA,
USA, May 6-9, 2019. OpenReview.net.
Weng, T., Zhang, H., Chen, H., Song, Z., Hsieh, C.,
Daniel, L., Boning, D. S., and Dhillon, I. S. (2018).
Towards fast computation of certified robustness for
relu networks. In Dy, J. G. and Krause, A., edi-
tors, Proceedings of the 35th International Conference
on Machine Learning, ICML 2018, Stockholmsmäs-
san, Stockholm, Sweden, July 10-15, 2018, volume 80
of Proceedings of Machine Learning Research, pages
5273–5282.
Wong, E. and Kolter, J. Z. (2018). Provable defenses against
adversarial examples via the convex outer adversar-
ial polytope. In Dy, J. G. and Krause, A., editors,
Proceedings of the 35th International Conference on
Machine Learning, ICML 2018, Stockholmsmässan,
Stockholm, Sweden, July 10-15, 2018, volume 80 of
Proceedings of Machine Learning Research, pages
5283–5292. PMLR.
Xiang, W., Tran, H., and Johnson, T. T. (2017a). Out-
put reachable set estimation and verification for multi-
layer neural networks. CoRR, abs/1708.03322.
Xiang, W., Tran, H., and Johnson, T. T. (2017b). Reachable
set computation and safety verification for neural net-
works with relu activations. CoRR, abs/1712.08163.
Yuan, X., He, P., Zhu, Q., and Li, X. (2019). Adversar-
ial examples: Attacks and defenses for deep learning.
IEEE transactions on neural networks and learning
systems, 30(9):2805–2824.
Zantedeschi, V., Nicolae, M.-I., and Rawat, A. (2017).
Efficient defenses against adversarial attacks. In
ACM Workshop on Artificial Intelligence and Security,
pages 39–49.
Zhang, H., Weng, T.-W., Chen, P.-Y., Hsieh, C.-J., and
Daniel, L. (2018). Efficient neural network robust-
ness certification with general activation functions.
Advances in Neural Information Processing Systems,
31:4939–4948.
On the Formal Robustness Evaluation for AI-based Industrial Systems
321