All that remains is to study and research this approach. It could prove a vital response to the emerging danger of adversarial attacks. Borrowing from the way we dealt with computer viruses and applying the same idea to the adversarial defense problem may be exactly what is needed to address it. After all, computer viruses and adversarial attacks share the same goal: to invade a system and cause malfunctions that serve the attacker's aim and purpose. In a sense, adversarial attacks are the viruses of deep neural networks. Why not, then, try to detect them with the same pattern used against computer viruses? Testing this approach seems only rational, and the Hoplite Antivirus could be a practical solution to our problem.
4 CONCLUSIONS
This paper briefly presented the best-known adversarial attacks and defenses to date. It identified three main issues that remain unresolved regarding the efficiency of existing adversarial defenses. It then proposed a potential solution, the Hoplite Antivirus approach, which follows the same pattern found in the majority of antivirus software frameworks for computer viruses. The Hoplite Antivirus will contain a series of pre-trained DNNs on which multiple defense strategies have already been applied. These DNNs will be continuously re-trained on newer adversarial examples in order to keep pace with evolving attacks, and they will be made available to the public as secure software packages, ready for use by consumers and machine learning engineers. Such an approach could prove vital to the still unresolved problem of making neural networks fully robust and attack-proof.
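As a rough illustration of the re-training idea, the following sketch mixes clean and adversarial batches in a single training pass. It is only a minimal example, not the Hoplite implementation: the model and data loader are assumed to be supplied by the user, and the fast gradient sign method (FGSM) stands in for whatever newer attacks the Hoplite DNNs would actually be re-trained against.

# Minimal sketch (not the authors' implementation): FGSM-based adversarial
# re-training in PyTorch. `model` and `loader` are assumed to be supplied
# by the caller; inputs are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_batch(model, x, y, eps=0.03):
    # Craft one batch of FGSM adversarial examples.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_retrain(model, loader, epochs=1, lr=1e-3, eps=0.03):
    # One re-training pass that mixes clean and adversarial batches.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_batch(model, x, y, eps)
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()
    return model

In the envisioned workflow, a pass of this kind would be repeated whenever new attack samples become available, so that the distributed packages stay up to date.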
Study and research are still to come, but this proposal serves as a solid starting point and guide. The team behind Hoplite will soon initiate a full-scale research and testing phase on this proposal. Using high-end physical machines capable of the resource- and time-intensive tasks of large-scale dataset processing, prolonged DNN training and the application of defense techniques, the team aims to monitor progress and publish the results of each research and testing stage. First, a careful selection of datasets of different kinds will be made, and these datasets will be pre-processed and cleaned. Well-studied DNN architectures will then be matched to the now train-ready datasets. Once these steps are completed, multiple combinations of adversarial defenses will be applied to the resulting DNN-dataset pairs (a schematic sketch of this grid follows below); this is the process intended to make the DNNs truly robust, however long and resource-consuming it may be. The final phase will involve transforming the DNNs into encrypted, secure packages ready for distribution over the Internet. A better understanding of the Hoplite proposal's potential, based on the team's work, is expected during 2022. The main goal is to find out whether the Hoplite Antivirus approach can indeed be the ultimate solution against adversarial attacks.
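To make the planned workflow concrete, the sketch below enumerates every combination of candidate defenses over each dataset-architecture pair. It is purely illustrative: the helper functions passed in (load_dataset, build_model, apply_defense, evaluate_robustness) and the listed identifiers are hypothetical placeholders for the preparation, matching and defense-stacking steps described above, not an existing Hoplite API.

# Hypothetical grid over (dataset, architecture, defense combination);
# all names below are placeholders, not part of any released Hoplite code.
from itertools import combinations

DATASETS = ["dataset_a", "dataset_b"]            # placeholder identifiers
ARCHITECTURES = ["resnet_like", "vgg_like"]      # placeholder identifiers
DEFENSES = ["adversarial_training", "defensive_distillation", "input_denoising"]

def run_defense_grid(load_dataset, build_model, apply_defense, evaluate_robustness):
    results = []
    for ds_name in DATASETS:
        data = load_dataset(ds_name)             # pre-processed, train-ready data
        for arch in ARCHITECTURES:
            for k in range(1, len(DEFENSES) + 1):
                for combo in combinations(DEFENSES, k):
                    model = build_model(arch)
                    for defense in combo:        # stack the chosen defenses
                        model = apply_defense(model, data, defense)
                    results.append((ds_name, arch, combo, evaluate_robustness(model, data)))
    return results

Even at this toy scale the grid grows quickly (2 datasets x 2 architectures x 7 non-empty defense combinations = 28 robust-training runs), which illustrates why the high-end hardware mentioned above is expected to matter.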
ACKNOWLEDGEMENTS
The research leading to these results has received
funding from the European Commission under the
H2020 Programme’s project “DataPorts” (Grant
Agreement No. 871493).