diffusion model. To overcome challenges
associated with medical image translation, we
explored an approach that combines concepts from
generative adversarial networks (GANs) and
diffusion models. The results, evaluated with
metrics such as PSNR and SSIM, demonstrate the
capability of the model to generate contrast-enhanced
cardiac images while preserving quality and visual
similarity. However, the RMSE analysis indicates
persistent challenges, suggesting variations that
require a deeper understanding in order to improve
the consistency and fidelity of the generated images.
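To make the evaluation concrete, the sketch below illustrates how these metrics can be computed for a generated slice against its contrast-enhanced reference. This is a minimal illustration using scikit-image and synthetic arrays in place of real CT data, not the evaluation code used in this study.

# Minimal sketch (not the study's evaluation code): PSNR, SSIM and RMSE
# between a generated contrast-enhanced slice and its reference, assuming
# both are 2-D numpy arrays normalised to the range [0, 1].
import numpy as np
from skimage.metrics import (
    peak_signal_noise_ratio,
    structural_similarity,
    mean_squared_error,
)

def evaluate_slice(reference: np.ndarray, generated: np.ndarray) -> dict:
    """Return PSNR, SSIM and RMSE for one image pair."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
    ssim = structural_similarity(reference, generated, data_range=1.0)
    rmse = float(np.sqrt(mean_squared_error(reference, generated)))
    return {"PSNR": psnr, "SSIM": ssim, "RMSE": rmse}

# Example with synthetic data standing in for real CT slices.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
gen = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)
print(evaluate_slice(ref, gen))

Higher PSNR and SSIM indicate better quality and structural similarity, while a lower RMSE indicates smaller pixel-wise deviations from the reference.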
In conclusion, the developed model delivers
notable results, but the study acknowledges the need
for continued improvements to address variations in
the generated images. The combination of GANs and
diffusion models is promising, pointing towards
future research and development in medical image
translation and contributing to the advancement of
this crucial area of clinical practice.
ACKNOWLEDGEMENTS
The authors acknowledge the Coordenação de
Aperfeiçoamento de Pessoal de Nível Superior
(CAPES), Brazil - Finance Code 001, the Conselho
Nacional de Desenvolvimento Científico e
Tecnológico (CNPq), Brazil, the Fundação de
Amparo à Pesquisa e ao Desenvolvimento Científico
e Tecnológico do Maranhão (FAPEMA), Brazil, the
Empresa Brasileira de Serviços Hospitalares (Ebserh),
Brazil (Grant number 409593/2021-4), and the
Portuguese funding agency, FCT - Fundação para a
Ciência e a Tecnologia, within project
UIDB/50014/2020 (DOI:
https://doi.org/10.54499/UIDB/50014/2020), for the
financial support.