curacy is as important as the defense against adversaries. These preliminary results pave the way for broader research in this area. The study first aimed to evaluate the feasibility of implementing the framework and then to understand the scope for improving its performance. We have achieved satisfactory results with the initial set of experiments; however, we aim to validate this work in the future with more concrete results. Hence, we propose some possible directions for extending our work:
• Train with more powerful models to observe whether the improvement in accuracy is considerable.
• Train with different types and variants of attacks to understand how well the method generalizes. Training a defense against geometric adversaries would also be interesting, since the ERAN toolbox supports geometric certification.
• Extend the work to real-life datasets to gain better insights into this framework; even the simplest models already perform well on simple datasets such as MNIST.
• Formulate hyper-parameter tuning to find the right ε for abstract certification and achieve the maximum improvement in accuracy (a minimal sketch follows this list).
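As an illustration of the last two directions, the following is a minimal sketch, assuming a PyTorch set-up, of how attack variants and perturbation budgets could be swept during adversarial training with the torchattacks library cited above (Kim, 2020). The make_model and certify arguments, the epsilon grid, and the optimizer settings are placeholders introduced here for illustration; the certification step itself would be delegated to an external tool such as the ERAN analyzer and is not implemented in this sketch.

# Hypothetical sketch (not the implementation evaluated in this paper):
# sweep attack variants and perturbation budgets epsilon during
# adversarial training, then score each trained model with an external
# certification routine (e.g. the ERAN analyzer, invoked separately).
import torch
import torchattacks


def adversarial_epoch(model, loader, attack, optimizer, device="cpu"):
    """One epoch of adversarial training against a torchattacks attack."""
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = attack(images, labels)          # craft adversarial batch
        optimizer.zero_grad()
        loss = criterion(model(adv_images), labels)  # train on perturbed inputs
        loss.backward()
        optimizer.step()


def sweep(make_model, train_loader, certify, epsilons=(2 / 255, 4 / 255, 8 / 255)):
    """Train one model per (attack, epsilon) pair and report a certification score.

    make_model builds a fresh network and certify(model, eps) stands in for an
    external certification step; both are assumptions of this sketch.
    """
    attacks = {
        "FGSM": lambda m, eps: torchattacks.FGSM(m, eps=eps),
        "PGD": lambda m, eps: torchattacks.PGD(m, eps=eps, alpha=eps / 4, steps=10),
    }
    results = {}
    for name, make_attack in attacks.items():
        for eps in epsilons:
            model = make_model()
            optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
            adversarial_epoch(model, train_loader, make_attack(model, eps), optimizer)
            results[(name, eps)] = certify(model, eps)
    return results

In practice, one would also record natural accuracy for each configuration so that the trade-off between natural and robust accuracy discussed above can be tracked across the grid.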
ACKNOWLEDGEMENT
We gratefully acknowledge the support of Sockeye, the advanced research computing platform at The University of British Columbia, for the resource allocation for this study.
REFERENCES
Akhtar, N. and Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410–14430.
Brosnan, T. and Sun, D.-W. (2004). Improving quality inspection of food products by computer vision – a review. Journal of Food Engineering, 61(1):3–16.
Cousot, P. and Cousot, R. (1977). Abstract interpretation:
a unified lattice model for static analysis of programs
by construction or approximation of fixpoints. In Pro-
ceedings of the 4th ACM SIGACT-SIGPLAN sympo-
sium on Principles of programming languages, pages
238–252.
Dong, G. and Liu, H. (2018). Feature engineering for ma-
chine learning and data analytics. CRC Press.
Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., and Vechev, M. (2018). AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Ex-
plaining and harnessing adversarial examples. arXiv
preprint arXiv:1412.6572.
Kamilaris, A. and Prenafeta-Boldú, F. X. (2018). Deep learning in agriculture: A survey. Computers and Electronics in Agriculture, 147:70–90.
Khan, S., Rahmani, H., Shah, S. A. A., and Bennamoun,
M. (2018). A guide to convolutional neural networks
for computer vision. Synthesis Lectures on Computer
Vision, 8(1):1–207.
Kim, H. (2020). Torchattacks: A pytorch repository for
adversarial attacks. arXiv preprint arXiv:2010.01950.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple
layers of features from tiny images.
LeCun, Y. (1998). The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2017). Towards deep learning mod-
els resistant to adversarial attacks. arXiv preprint
arXiv:1706.06083.
Nguyen, A., Yosinski, J., and Clune, J. (2015). Deep neural
networks are easily fooled: High confidence predic-
tions for unrecognizable images. In Proceedings of
the IEEE conference on computer vision and pattern
recognition, pages 427–436.
Otter, D. W., Medina, J. R., and Kalita, J. K. (2020). A sur-
vey of the usages of deep learning for natural language
processing. IEEE Transactions on Neural Networks
and Learning Systems, 32(2):604–624.
Papernot, N., McDaniel, P., and Goodfellow, I. (2016).
Transferability in machine learning: from phenomena
to black-box attacks using adversarial samples. arXiv
preprint arXiv:1605.07277.
Shafahi, A., Najibi, M., Ghiasi, A., Xu, Z., Dickerson, J.,
Studer, C., Davis, L. S., Taylor, G., and Goldstein, T.
(2019). Adversarial training for free! arXiv preprint
arXiv:1904.12843.
Singh, G., Ganvir, R., Püschel, M., and Vechev, M. (2019a). Beyond the single neuron convex barrier for neural network certification.
Singh, G., Gehr, T., Mirman, M., Püschel, M., and Vechev, M. T. (2018). Fast and effective robustness certification. NeurIPS, 1(4):6.
Singh, G., Gehr, T., Püschel, M., and Vechev, M. (2019b). An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 3(POPL):1–30.
Singh, G., Mirman, M., Gehr, T., Hoffman, A., Tsankov, P., Drachsler-Cohen, D., Püschel, M., and Vechev, M. (2019c). ETH robustness analyzer for neural networks (ERAN).
Voulodimos, A., Doulamis, N., Doulamis, A., and Protopa-
padakis, E. (2018). Deep learning for computer vi-
sion: A brief review. Computational intelligence and
neuroscience, 2018.
Wong, E., Rice, L., and Kolter, J. Z. (2020). Fast is bet-
ter than free: Revisiting adversarial training. arXiv
preprint arXiv:2001.03994.
Xu, H., Ma, Y., Liu, H.-C., Deb, D., Liu, H., Tang, J.-L., and Jain, A. K. (2020). Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17(2):151–178.