
Leynes, A. P., Yang, J., Wiesinger, F., Kaushik, S. S., Shanbhag, D. D., Seo, Y., Hope, T. A., and Larson, P. E. (2018). Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI. Journal of Nuclear Medicine, 59(5):852–858.
Lowekamp, B. C., Chen, D. T., Ibáñez, L., and Blezek, D. (2013). The Design of SimpleITK. Frontiers in Neuroinformatics, 7:45.
Mattes, D., Haynor, D. R., Vesselle, H., Lewellen, T. K., and Eubank, W. (2003). PET-CT Image Registration in the Chest Using Free-form Deformations. IEEE Transactions on Medical Imaging, 22(1):120–128.
Mirza, M. and Osindero, S. (2014). Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784.
Nie, D., Cao, X., Gao, Y., Wang, L., and Shen, D. (2016). Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In Deep Learning and Data Labeling for Medical Applications: First International Workshop, LABELS 2016, and Second International Workshop, DLMIA 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 21, 2016, Proceedings 1, pages 170–178. Springer.
Oktay, O., Schlemper, J., Folgoc, L. L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N. Y., Kainz, B., et al. (2018). Attention U-Net: Learning Where to Look for the Pancreas. arXiv preprint arXiv:1804.03999.
Park, J., Woo, S., Lee, J.-Y., and Kweon, I. S. (2018). BAM: Bottleneck Attention Module. arXiv preprint arXiv:1807.06514.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, pages 8026–8037.
Paulus, D. H., Quick, H. H., Geppert, C., Fenchel, M., Zhan, Y., Hermosillo, G., Faul, D., Boada, F., Friedman, K. P., and Koesters, T. (2015). Whole-body PET/MR Imaging: Quantitative Evaluation of a Novel Model-based MR Attenuation Correction Method Including Bone. Journal of Nuclear Medicine, 56(7):1061–1066.
Qi, M., Li, Y., Wu, A., Jia, Q., Li, B., Sun, W., Dai, Z., Lu, X., Zhou, L., Deng, X., et al. (2020). Multi-sequence MR Image-based Synthetic CT Generation Using a Generative Adversarial Network for Head and Neck MRI-only Radiotherapy. Medical Physics, 47(4):1880–1894.
Quick, H. H. (2014). Integrated PET/MR. Journal of Magnetic Resonance Imaging, 39(2):243–258.
Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved Techniques for Training GANs. Advances in Neural Information Processing Systems, 29.
Schmidt, M. A. and Payne, G. S. (2015). Radiotherapy Planning Using MRI. Physics in Medicine & Biology, 60(22):R323.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626.
Torrado-Carvajal, A., Vera-Olmos, J., Izquierdo-Garcia, D., Catalano, O. A., Morales, M. A., Margolin, J., Soricelli, A., Salvatore, M., Malpica, N., and Catana, C. (2019). Dixon-VIBE Deep Learning (DIVIDE) Pseudo-CT Synthesis for Pelvis PET/MR Attenuation Correction. Journal of Nuclear Medicine, 60(3):429–435.
Wang, Y., Liu, C., Zhang, X., and Deng, W. (2019). Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Frontiers in Oncology, 9:1333.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, 13(4):600–612.
West, J., Fitzpatrick, J. M., Wang, M. Y., Dawant, B. M., Maurer Jr, C. R., Kessler, R. M., Maciunas, R. J., Barillot, C., Lemoine, D., Collignon, A., et al. (1997). Comparison and Evaluation of Retrospective Intermodality Brain Image Registration Techniques. Journal of Computer Assisted Tomography, 21(4):554–568.
Wolterink, J. M., Dinkla, A. M., Savenije, M. H., Seevinck, P. R., van den Berg, C. A., and Išgum, I. (2017). Deep MR to CT Synthesis Using Unpaired Data. In International Workshop on Simulation and Synthesis in Medical Imaging, pages 14–23. Springer.
Woo, S., Park, J., Lee, J.-Y., and Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), pages 3–19.
Wu, J., Huang, Z., Thoma, J., Acharya, D., and Van Gool, L. (2018). Wasserstein Divergence for GANs. In Proceedings of the European Conference on Computer Vision (ECCV), pages 653–668.
Xiang, L., Li, Y., Lin, W., Wang, Q., and Shen, D. (2018). Unpaired Deep Cross-Modality Synthesis with Fast Training. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings 4, pages 155–164. Springer.
Xie, S., Girshick, R., Dollár, P., Tu, Z., and He, K. (2017). Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492–1500.