
REFERENCES
Abdal, R., Qin, Y., and Wonka, P. (2019). Image2stylegan:
How to embed images into the stylegan latent space?
In Proceedings of the IEEE/CVF International Con-
ference on Computer Vision, pages 4432–4441.
Afifi, M., Brubaker, M. A., and Brown, M. S. (2021). His-
togan: Controlling colors of gan-generated and real
images via color histograms. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pat-
tern Recognition, pages 7941–7950.
Arjovsky, M., Chintala, S., and Bottou, L. (2017). Wasser-
stein generative adversarial networks. In International
Conference on Machine Learning, pages 214–223.
PMLR.
Avanaki, N. J., Ghildiyal, A., Barman, N., and Zad-
tootaghaj, S. (2024). Lar-iqa: A lightweight, accu-
rate, and robust no-reference image quality assess-
ment model. arXiv preprint arXiv:2408.17057.
Baluja, S. (2017). Hiding images in plain sight: Deep
steganography. Advances in Neural Information Pro-
cessing Systems, 30:2069–2079.
Barni, M., Bartolini, F., and Piva, A. (2001). Im-
proved wavelet-based watermarking through pixel-
wise masking. IEEE Transactions on Image Process-
ing, 10(5):783–791.
ITU-R (2002). Methodology for the subjective assess-
ment of the quality of television pictures. Recommen-
dation ITU-R BT.500, International Telecommunica-
tion Union.
Chen, M.-J. and Bovik, A. C. (2011). Fast structural sim-
ilarity index algorithm. Journal of Real-Time Image
Processing, 6(4):281–287.
Cunha, T., Schirmer, L., Marcos, J., and Gonçalves,
N. (2024). Noise simulation for the improvement
of training deep neural network for printer-proof
steganography. In Proceedings of the 13th Interna-
tional Conference on Pattern Recognition Applica-
tions and Methods, pages 179–186.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-
Fei, L. (2009). Imagenet: A large-scale hierarchical
image database. In 2009 IEEE Conference on Com-
puter Vision and Pattern Recognition, pages 248–255.
IEEE.
Deng, J., Guo, J., Xue, N., and Zafeiriou, S. (2019). Ar-
cface: Additive angular margin loss for deep face
recognition. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition,
pages 4690–4699.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A. C., and Ben-
gio, Y. (2014). Generative adversarial nets. In Ad-
vances in Neural Information Processing Systems.
Hancock, P. (2008). Psychological image collection at stir-
ling (pics). http://pics.psych.stir.ac.uk.
Hsu, C.-T. and Wu, J.-L. (1999). Hidden digital watermarks
in images. IEEE Transactions on Image Processing,
8(1):58–68.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017).
Image-to-image translation with conditional adversar-
ial networks. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages
1125–1134.
Jaderberg, M., Simonyan, K., Zisserman, A., et al. (2015).
Spatial transformer networks. In Advances in Neural
Information Processing Systems, pages 2017–2025.
Jing, J., Deng, X., Xu, M., Wang, J., and Guan, Z. (2021).
Hinet: Deep image hiding by invertible network. In
Proceedings of the IEEE/CVF International Confer-
ence on Computer Vision, pages 4733–4742.
Kettunen, M., Härkönen, E., and Lehtinen, J. (2019). E-
lpips: Robust perceptual image similarity via ran-
dom transformation ensembles. arXiv preprint
arXiv:1906.03973.
Kingma, D. P. and Ba, J. (2014). Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980.
Liu, Z., Luo, P., Wang, X., and Tang, X. (2018). Large-
scale celebfaces attributes (celeba) dataset. Retrieved
August 15, 2018.
Mirza, M. and Osindero, S. (2014). Conditional generative
adversarial nets. arXiv preprint arXiv:1411.1784.
Oktay, O., Schlemper, J., Folgoc, L. L., Lee, M., Heinrich,
M., Misawa, K., Mori, K., McDonagh, S., Hammerla,
N. Y., Kainz, B., et al. (2018). Attention u-net: Learn-
ing where to look for the pancreas. arXiv preprint
arXiv:1804.03999.
O’Ruanaidh, J., Dowling, W., and Boland, F. (1996). Wa-
termarking digital images for copyright protection.
IEE Proceedings-Vision, Image and Signal Process-
ing, 143(4):250–256.
Phillips, P. J., Moon, H., Rizvi, S. A., and Rauss, P. J.
(2000). The feret evaluation methodology for face-
recognition algorithms. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, 22(10):1090–
1104.
Shadmand, F., Medvedev, I., and Gonçalves, N. (2021).
Codeface: A deep learning printer-proof steganog-
raphy for face portraits. IEEE Access, 9:167282–
167291.
Shadmand, F., Medvedev, I., Schirmer, L., Marcos, J., and
Gonçalves, N. (2024). Stampone: Addressing fre-
quency balance in printer-proof steganography. In
Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition, pages 4367–
4376.
Su, H., Niu, J., Liu, X., Li, Q., Wan, J., Xu, M., and Ren, T.
(2021). Artcoder: An end-to-end method for generat-
ing scanning-robust stylized qr codes. In Proceedings
of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), pages 2277–2286.
StylePuncher: Encoding a Hidden QR Code into Images