
for data-to-text tasks. In Davis, B., Graham, Y., Kelleher,
J., and Sripada, Y., editors, Proceedings of the 13th Inter-
national Conference on Natural Language Generation,
pages 97–102, Dublin, Ireland. Association for Compu-
tational Linguistics.
Kipf, T. N. and Welling, M. (2017). Semi-supervised classi-
fication with graph convolutional networks. In 5th Inter-
national Conference on Learning Representations, ICLR
2017, Toulon, France, April 24-26, 2017, Conference
Track Proceedings.
Krishna, K., Song, Y., Karpinska, M., Wieting, J. F., and
Iyyer, M. (2023). Paraphrasing evades detectors of AI-
generated text, but retrieval is an effective defense. In
Thirty-seventh Conference on Neural Information Pro-
cessing Systems.
Kumar, A., Ahuja, K., Vadapalli, R., and Talukdar, P.
(2020). Syntax-guided controlled generation of para-
phrases. Transactions of the Association for Computa-
tional Linguistics, 8:329–345.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mo-
hamed, A., Levy, O., Stoyanov, V., and Zettlemoyer,
L. (2020). BART: Denoising sequence-to-sequence pre-
training for natural language generation, translation, and
comprehension. In Proceedings of the 58th Annual
Meeting of the Association for Computational Linguis-
tics, pages 7871–7880, Online. Association for Compu-
tational Linguistics.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D.,
Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V.
(2020). RoBERTa: A robustly optimized BERT pretraining
approach. arXiv e-prints.
Loshchilov, I. and Hutter, F. (2019). Decoupled weight
decay regularization. In International Conference on
Learning Representations.
Meng, Y., Ao, X., He, Q., Sun, X., Han, Q., Wu, F., Fan, C.,
and Li, J. (2021). ConRPG: Paraphrase generation using
contexts as regularizer. In Proceedings of the 2021 Con-
ference on Empirical Methods in Natural Language Pro-
cessing. Association for Computational Linguistics.
Nan, L., Radev, D., Zhang, R., Rau, A., Sivaprasad, A.,
Hsieh, C., Tang, X., Vyas, A., Verma, N., Krishna, P.,
Liu, Y., Irwanto, N., Pan, J., Rahman, F., Zaidi, A., Mu-
tuma, M., Tarabar, Y., Gupta, A., Yu, T., Tan, Y. C., Lin,
X. V., Xiong, C., Socher, R., and Rajani, N. F. (2021).
DART: Open-domain structured data record to text gen-
eration. In Proceedings of the 2021 Conference of the
North American Chapter of the Association for Com-
putational Linguistics: Human Language Technologies,
pages 432–447, Online. Association for Computational
Linguistics.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002).
Bleu: a method for automatic evaluation of machine
translation. In Proceedings of the 40th Annual Meeting
of the Association for Computational Linguistics, pages
311–318, Philadelphia, Pennsylvania, USA. Association
for Computational Linguistics.
Park, S., Hwang, S.-w., Chen, F., Choo, J., Ha, J.-W., Kim,
S., and Yim, J. (2019). Paraphrase diversification using
counterfactual debiasing. Proceedings of the AAAI Con-
ference on Artificial Intelligence, 33(01):6883–6891.
Qian, L., Qiu, L., Zhang, W., Jiang, X., and Yu, Y. (2019).
Exploring diverse expressions for paraphrase genera-
tion. In Proceedings of the 2019 Conference on Empir-
ical Methods in Natural Language Processing and the
9th International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP), pages 3173–3182, Hong
Kong, China. Association for Computational Linguistics.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and
Sutskever, I. (2019). Language models are unsupervised
multitask learners. Technical report, OpenAI.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S.,
Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Ex-
ploring the limits of transfer learning with a unified text-
to-text transformer. arXiv e-prints.
Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sen-
tence embeddings using Siamese BERT-networks. In
Proceedings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the 9th Interna-
tional Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong,
China. Association for Computational Linguistics.
Ribeiro, L. F. R., Gardent, C., and Gurevych, I. (2019). En-
hancing AMR-to-text generation with dual graph repre-
sentations. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Processing and
the 9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 3183–3194,
Hong Kong, China. Association for Computational Lin-
guistics.
Ribeiro, L. F. R., Pfeiffer, J., Zhang, Y., and Gurevych, I.
(2021a). Smelting gold and silver for improved multilin-
gual AMR-to-text generation. In Proceedings of the 2021
Conference on Empirical Methods in Natural Language
Processing, EMNLP 2021, Punta Cana, November 7-11,
2021.
Ribeiro, L. F. R., Schmitt, M., Schütze, H., and Gurevych,
I. (2020a). Investigating pretrained language models for
graph-to-text generation. arXiv e-prints.
Ribeiro, L. F. R., Zhang, Y., Gardent, C., and Gurevych, I.
(2020b). Modeling global and local node contexts for
text generation from knowledge graphs. Transactions
of the Association for Computational Linguistics, 8:589–
604.
Ribeiro, L. F. R., Zhang, Y., and Gurevych, I. (2021b).
Structural adapters in pretrained language models for
AMR-to-Text generation. In Moens, M.-F., Huang, X.,
Specia, L., and Yih, S. W.-t., editors, Proceedings of the
2021 Conference on Empirical Methods in Natural Lan-
guage Processing, pages 4269–4282, Online and Punta
Cana, Dominican Republic. Association for Computa-
tional Linguistics.
Schlichtkrull, M. S., Kipf, T. N., Bloem, P., van den Berg,
R., Titov, I., and Welling, M. (2018). Modeling relational
data with graph convolutional networks. In The Semantic
Web - 15th International Conference, ESWC 2018, Her-
aklion, Crete, Greece, June 3-7, 2018, Proceedings, pages
593–607.
Schmitt, M., Ribeiro, L. F. R., Dufter, P., Gurevych, I., and
Schütze, H. (2021). Modeling graph structure via relative
position for text generation from knowledge graphs. In
Proceedings of the Fifteenth Workshop on Graph-Based
ICAART 2025 - 17th International Conference on Agents and Artificial Intelligence