
Antsiferova, A., Abud, K., Gushchin, A., Shumitskaya, E.,
Lavrushkin, S., and Vatolin, D. (2024). Comparing the
robustness of modern no-reference image- and video-
quality metrics to adversarial attacks. In Proceedings
of the AAAI Conference on Artificial Intelligence, vol-
ume 38, pages 700–708.
Bonnet, B., Furon, T., and Bas, P. (2020). Fooling an
automatic image quality estimator. In MediaEval
2020 - MediaEval Benchmarking Initiative for Multimedia
Evaluation, pages 1–4.
Chen, K., Wei, Z., Chen, J., Wu, Z., and Jiang, Y.-G. (2023).
GCMA: Generative cross-modal transferable adversarial
attacks from images to videos. In Proceedings of
the 31st ACM International Conference on Multime-
dia, pages 698–708.
Chen, M.-J. and Bovik, A. C. (2011). No-reference image
blur assessment using multiscale gradient. EURASIP
Journal on Image and Video Processing, 2011:1–11.
Deng, W., Yang, C., Huang, K., Liu, Y., Gui, W., and Luo,
J. (2024). Sparse adversarial video attack based on
dual-branch neural network on industrial artificial in-
telligence of things. IEEE Transactions on Industrial
Informatics.
Dong, Y., Pang, T., Su, H., and Zhu, J. (2019). Evad-
ing defenses to transferable adversarial examples by
translation-invariant attacks. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pages 4312–4321.
Fang, Y., Zhu, H., Zeng, Y., Ma, K., and Wang, Z. (2020).
Perceptual quality assessment of smartphone photography.
In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pages 3677–
3686.
Hosu, V., Hahn, F., Jenadeleh, M., Lin, H., Men, H.,
Szirányi, T., Li, S., and Saupe, D. (2020). The Konstanz
natural video database.
Huang, Q., Katsman, I., He, H., Gu, Z., Belongie, S., and
Lim, S.-N. (2019). Enhancing adversarial example
transferability with an intermediate level attack. In
Proceedings of the IEEE/CVF International Conference
on Computer Vision, pages 4733–4742.
Kashkarov, E., Chistov, E., Molodetskikh, I., and Vatolin,
D. (2024). Can no-reference quality-assessment meth-
ods serve as perceptual losses for super-resolution?
arXiv preprint arXiv:2405.20392.
Konstantinov, D., Lavrushkin, S., and Vatolin, D. (2024).
Image robustness to adversarial attacks on no-
reference image-quality metrics. In 2024 32nd Eu-
ropean Signal Processing Conference (EUSIPCO),
pages 611–615. IEEE.
Korhonen, J. and You, J. (2022). Adversarial attacks against
blind image quality assessment models. In Proceed-
ings of the 2nd Workshop on Quality of Experience in
Visual Multimedia Applications, pages 3–11.
Leonenkova, V., Shumitskaya, E., Antsiferova, A., and Va-
tolin, D. (2024). Ti-patch: Tiled physical adversarial
patch for no-reference video quality metrics. arXiv
preprint arXiv:2404.09961.
Li, D., Jiang, T., and Jiang, M. (2019). Quality assessment
of in-the-wild videos. In Proceedings of the 27th ACM
International Conference on Multimedia, pages 2351–
2359.
Li, D., Jiang, T., and Jiang, M. (2021). Unified quality
assessment of in-the-wild videos with mixed datasets
training. International Journal of Computer Vision,
129(4):1238–1257.
Lin, J., Song, C., He, K., Wang, L., and Hopcroft, J. E.
(2019). Nesterov accelerated gradient and scale
invariance for adversarial attacks. arXiv preprint
arXiv:1908.06281.
Lu, Y., Jia, Y., Wang, J., Li, B., Chai, W., Carin, L., and
Velipasalar, S. (2020). Enhancing cross-task black-
box transferability of adversarial examples with dispersion
reduction. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition,
pages 940–949.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and
Vladu, A. (2018). Towards deep learning models re-
sistant to adversarial attacks. In International Confer-
ence on Learning Representations.
MediaEval (2020). Pixel privacy: Quality camouflage
for social images. https://multimediaeval.github.io/
editions/2020/tasks/pixelprivacy/.
Meftah, H. F. B., Fezza, S. A., Hamidouche, W., and
Déforges, O. (2023). Evaluating the vulnerability of
deep learning-based image quality assessment methods
to adversarial attacks. In 2023 11th European
Workshop on Visual Information Processing (EUVIP),
pages 1–6. IEEE.
Papernot, N., McDaniel, P., and Goodfellow, I. (2016).
Transferability in machine learning: from phenomena
to black-box attacks using adversarial samples. arXiv
preprint arXiv:1605.07277.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G.,
Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark,
J., et al. (2021). Learning transferable visual models
from natural language supervision. In International
Conference on Machine Learning, pages 8748–8763.
PMLR.
Ran, Y., Zhang, A.-X., Li, M., Tang, W., and Wang, Y.-G.
(2025). Black-box adversarial attacks against image
quality assessment models. Expert Systems with Ap-
plications, 260:125415.
Shumitskaya, E., Antsiferova, A., and Vatolin, D. (2024a).
Towards adversarial robustness verification of no-
reference image- and video-quality metrics. Computer
Vision and Image Understanding, 240:103913.
Shumitskaya, E., Antsiferova, A., and Vatolin, D. S.
(2022). Universal perturbation attack on differentiable
no-reference image- and video-quality metrics. In
33rd British Machine Vision Conference 2022, BMVC
2022, London, UK, November 21-24, 2022. BMVA
Press.
Shumitskaya, E., Antsiferova, A., and Vatolin, D. S.
(2024b). IOI: Invisible one-iteration adversarial attack
on no-reference image- and video-quality metrics. In
Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A.,
Oliver, N., Scarlett, J., and Berkenkamp, F., editors,
Proceedings of the 41st International Conference
on Machine Learning, volume 235 of Proceedings
of Machine Learning Research. PMLR.