
practical effectiveness and robustness of our approach are the subject of future work. All in all, given a robust watermarking algorithm, we confirm that watermarks can be transferred from the GAN training images to the GAN-generated images. Future work will include a thorough study of watermark transferability across various generative network architectures and an extension of the study to other domains such as video. Furthermore, the embedding of encrypted watermarks will be studied to address the security aspect.
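For illustration, the following minimal Python sketch outlines the watermark-transfer check summarized above: every training image receives the same blind watermark before GAN training, and the same blind extractor is later applied to the generated samples. It assumes the open-source blind_watermark package (a DWT-DCT-SVD scheme) as the robust watermarking algorithm, and the paths, payload, and the train_gan/sample_gan helpers are hypothetical placeholders for whatever GAN toolchain is used; this is a sketch of the evaluation pipeline, not the exact experimental code.

```python
from pathlib import Path
from blind_watermark import WaterMark  # assumed API of the blind_watermark package (DWT-DCT-SVD)

WM_TEXT = "GENSYNTH"                    # hypothetical watermark payload
PASSWORDS = dict(password_img=1, password_wm=1)

def watermark_training_set(src_dir: str, dst_dir: str) -> int:
    """Embed the same blind watermark into every training image; return its bit length."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    wm_len = 0
    for img_path in sorted(Path(src_dir).glob("*.png")):
        bwm = WaterMark(**PASSWORDS)
        bwm.read_img(str(img_path))
        bwm.read_wm(WM_TEXT, mode="str")
        bwm.embed(str(Path(dst_dir) / img_path.name))
        wm_len = len(bwm.wm_bit)        # bit length is needed again at extraction time
    return wm_len

def check_transfer(gen_dir: str, wm_len: int) -> float:
    """Try to extract the watermark from GAN-generated images; return the hit rate."""
    hits, total = 0, 0
    for img_path in sorted(Path(gen_dir).glob("*.png")):
        bwm = WaterMark(**PASSWORDS)
        try:
            extracted = bwm.extract(str(img_path), wm_shape=wm_len, mode="str")
        except Exception:               # heavily distorted bits may not decode to a string
            extracted = None
        hits += int(extracted == WM_TEXT)
        total += 1
    return hits / max(total, 1)

# Hypothetical usage; train_gan / sample_gan stand in for any GAN training toolchain:
# wm_len = watermark_training_set("data/fingerprints", "data/fingerprints_wm")
# train_gan("data/fingerprints_wm", "runs/gan_wm")     # placeholder
# sample_gan("runs/gan_wm", "samples/gan_wm")          # placeholder
# print("transfer rate:", check_transfer("samples/gan_wm", wm_len))
```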
ACKNOWLEDGEMENTS
This research has been funded in part by the Deutsche Forschungsgemeinschaft (DFG) through the research project GENSYNTH under project number 421860227.