5 CONCLUSIONS
This paper proposed Features Normalisation and Standardisation (FNS), an unsupervised approach for detecting adversarial images. Because it requires neither knowledge of the attacker's method nor retraining of the model, it is well suited to real-life scenarios. We provided an experimental comparison of iterative adversarial attack algorithms on the X-ray dataset, and the results show that the proposed algorithm accurately identifies adversarial images. The approach can be extended to other medical image datasets, where models other than a GMM may be used to model the extracted features.
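To illustrate the extension point above (swapping the GMM for another density model), the following is a minimal, hypothetical sketch of the general detection recipe: fit a density model to standardised features of clean images and flag low-likelihood samples as adversarial. For brevity, a single diagonal Gaussian stands in for the GMM here, and the feature data, dimensions, and threshold are purely illustrative, not the paper's actual pipeline.

```python
import math
import random

random.seed(0)
DIM = 4  # illustrative feature dimensionality

# Hypothetical stand-in for features extracted from clean images.
clean_feats = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(500)]

def fit_gaussian(samples):
    """Per-dimension mean and standard deviation of the clean features."""
    n = len(samples)
    means = [sum(s[d] for s in samples) / n for d in range(DIM)]
    stds = [math.sqrt(sum((s[d] - means[d]) ** 2 for s in samples) / n)
            for d in range(DIM)]
    return means, stds

def log_likelihood(x, means, stds):
    """Diagonal-Gaussian log-likelihood of one feature vector."""
    return sum(
        -0.5 * math.log(2 * math.pi * sd * sd) - (xi - m) ** 2 / (2 * sd * sd)
        for xi, m, sd in zip(x, means, stds)
    )

means, stds = fit_gaussian(clean_feats)

# Threshold at roughly the 5th percentile of clean log-likelihoods,
# so ~95% of clean samples pass.
clean_ll = sorted(log_likelihood(s, means, stds) for s in clean_feats)
threshold = clean_ll[len(clean_ll) // 20]

def is_adversarial(x):
    """Flag feature vectors that are unlikely under the clean-feature model."""
    return log_likelihood(x, means, stds) < threshold

print(is_adversarial([6.0] * DIM))  # far from the clean distribution -> True
print(is_adversarial([0.0] * DIM))  # at the clean mean -> False
```

A GMM (e.g. via EM) or any other density estimator can replace `fit_gaussian`/`log_likelihood` without changing the detection logic: only the likelihood model varies.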