REFERENCES
Zhang, C., Wang, J., Zhou, Q., Xu, T., Tang, K., Gui, H., & Liu, F. (2022). A survey of automatic source code summarization. Symmetry, 14(3), 471. https://doi.org/10.3390/sym14030471
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://doi.org/10.5555/3295222.3295349
Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software engineering (Version 2.3). EBSE Technical Report EBSE-2007-01, Software Engineering Group, School of Computer Science and Mathematics, Keele University, and Department of Computer Science, University of Durham.
Wohlin, C. (2014). Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE). ACM. https://doi.org/10.1145/2601248.2601268
Yang, A., Liu, K., Liu, J., Lyu, Y., & Li, S. (2018). Adaptations of ROUGE and BLEU to better evaluate machine reading comprehension task. In Proceedings of the Workshop on Machine Reading for Question Answering (pp. 98–104). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-2611
Blagec, K., Dorffner, G., Moradi, M., Ott, S., & Samwald, M. (2022). A global analysis of metrics used for measuring performance in natural language processing. arXiv. https://arxiv.org/abs/2204.11574
Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D., & Zhou, M. (2020). CodeBERT: A pre-trained model for programming and natural languages. arXiv. https://arxiv.org/abs/2002.08155
Khan, J. Y., & Uddin, G. (2022). Automatic code documentation generation using GPT-3. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE) (pp. 124–135). IEEE. https://doi.org/10.1145/3551349.3559548
Hu, X., Li, G., Xia, X., Lo, D., & Jin, Z. (2018). Deep code comment generation. In Proceedings of the 26th International Conference on Program Comprehension (ICPC) (pp. 200–210). ACM. https://doi.org/10.1145/3196321.3196334
Wan, Y., Zhao, Z., Yang, M., Xu, G., Ying, H., Wu, J., & Yu, P. S. (2018). Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE) (pp. 397–407). ACM. https://doi.org/10.1145/3238147.3238206
Ahmad, W., Chakraborty, S., Ray, B., & Chang, K.-W. (2020). A transformer-based approach for source code summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 4998–5007). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.449
Allamanis, M., Brockschmidt, M., & Khademi, M. (2018). Learning to represent programs with graphs. In International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1711.00740
Zhang, M., Zhou, G., Yu, W., Huang, N., & Liu, W. (2023). GA-SCS: Graph-augmented source code summarization. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(2), Article 20. https://doi.org/10.1145/3554820
LeClair, A., Haque, S., Wu, L., & McMillan, C. (2020). Improved code summarization via a graph neural network. In Proceedings of the 28th International Conference on Program Comprehension (ICPC) (pp. 184–195). ACM. https://doi.org/10.1145/3387904.3389268
Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., Zhou, L., Duan, N., Svyatkovskiy, A., Fu, S., Tufano, M., Deng, S. K., Clement, C., Drain, D., Sundaresan, N., Yin, J., Jiang, D., & Zhou, M. (2021). GraphCodeBERT: Pre-training code representations with data flow. In International Conference on Learning Representations (ICLR). https://arxiv.org/abs/2009.08366
Wang, W., Zhang, Y., Sui, Y., Wan, Y., Zhao, Z., Wu, J., Yu, P. S., & Xu, G. (2022). Reinforcement-learning-guided source code summarization using hierarchical attention. IEEE Transactions on Software Engineering, 48(1), 102–119. https://doi.org/10.1109/tse.2020.2979701
Hu, X., Li, G., Xia, X., Lo, D., & Jin, Z. (2020). Deep code comment generation with hybrid lexical and syntactical information. Empirical Software Engineering, 25(3), 2179–2217. https://doi.org/10.1007/s10664-019-09730-9
Parvez, M. R., Ahmad, W. U., Chakraborty, S., Ray, B., & Chang, K.-W. (2021). Retrieval augmented code generation and summarization. arXiv. https://arxiv.org/abs/2108.11601
Lu, X., & Niu, J. (2023). Enhancing source code summarization from structure and semantics. In Proceedings of the 2023 International Joint Conference on Neural Networks. IEEE. https://doi.org/10.1109/ijcnn54540.2023.10191872
Phan, L., Tran, H., Le, D., Nguyen, H., Anibal, J., Peltekian, A., & Ye, Y. (2021). CoTexT: Multi-task learning with code-text Transformer. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (pp. 40–47). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.nlp4prog-1.5
Stapleton, S., Gambhir, Y., LeClair, A., Eberhart, Z., Weimer, W., & Leach, K. (2020). A human study of comprehension and code summarization. In Proceedings of the 28th International Conference on Program Comprehension (ICPC) (pp. 2–13). ACM. https://doi.org/10.1145/3387904.3389258
Due to space limitations, the full list of the 58 articles used in this literature review is provided online at: https://gkakaron.users.uth.gr/files/ENASE_2025_SLR_ARTICLES.pdf