
though these differences in resilience are not substantial. A deeper exploration of the factors contributing to this resilience in the Single model, as well as of the trade-offs between the Single model and MTD without ensemble models across different attack environments, would provide valuable insights for optimizing defense strategies in various adversarial contexts.
Additionally, future work could investigate scenarios in which the attacker has enhanced capabilities. For evasion attacks, this could include crafting adversarial examples with knowledge of a greater number of the model architectures in the pool; for poisoning attacks, it could involve the ability to compromise a larger number of models. Although this assumes a level of access and knowledge that is unrealistic in real-world scenarios, exploring these worst-case conditions would allow us to understand the robustness of different defense strategies under maximum adversarial pressure, further informing the development of resilient frameworks.
6 CONCLUSION
In this study, we aimed to assess the effectiveness
of HybridMTD, a novel defense strategy that com-
bines Moving Target Defense with ensemble neural
network models, against a wide range of adversarial
attacks. Our extensive experiments across four dif-
ferent datasets—MNIST (image), Twitter Sentiment
(text), KDD (tabular), and MIT-BIH (signals)—and
seven sophisticated attack types, including both eva-
sion and poisoning attacks, have demonstrated the ro-
bustness and resilience of HybridMTD.
The results indicate that HybridMTD significantly
outperforms the traditional MTD approach and con-
ventional single-model methods, maintaining high ac-
curacy and robustness. By leveraging the dynamic se-
lection of a subset of models from a diverse pool and
employing majority voting, HybridMTD increases
the unpredictability of the defense mechanism, mak-
ing it more challenging for adversaries to execute
their attacks successfully. HybridMTD performed exceptionally well against poisoning attacks, maintaining high performance as long as the majority of models remained uncompromised. Against evasion attacks, HybridMTD also demonstrated robust performance, particularly when adversarial examples did not severely degrade most models in the pool. Across all scenarios, we observed a substantial improvement in performance, confirming HybridMTD's effectiveness as a comprehensive defense strategy.
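To make this mechanism concrete, the following sketch outlines the per-query behavior described above: a random subset of k models is drawn from the pool, and the final label is decided by majority vote. This is a minimal illustration under our own assumptions; the pool composition, the subset size k, and the names hybrid_mtd_predict and DummyModel are hypothetical stand-ins, not the paper's actual implementation.

    import random
    from collections import Counter

    class DummyModel:
        # Stand-in for a trained classifier; a real pool would
        # hold neural networks with diverse architectures.
        def __init__(self, label):
            self.label = label

        def predict(self, x):
            return self.label  # a real model would classify x

    def hybrid_mtd_predict(model_pool, x, k=3, rng=random):
        # Draw a fresh random k-model subset for every query, so
        # an adversary cannot predict which models will answer.
        subset = rng.sample(model_pool, k)
        votes = [m.predict(x) for m in subset]
        # Majority voting: the most common label in the subset wins.
        label, _ = Counter(votes).most_common(1)[0]
        return label

    # Usage: a pool of five models; each call re-samples the subset.
    pool = [DummyModel(lbl) for lbl in (0, 0, 0, 1, 1)]
    print(hybrid_mtd_predict(pool, x=[0.1, 0.2]))

Because the subset is re-sampled on every query, an attacker who compromises or profiles only a minority of the pool usually faces a majority of unaffected voters, which is the intuition behind the resilience results summarized above.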