Henderson, M., Al-Rfou, R., Strope, B., Sung, Y.-h., Lukacs, L., Guo, R., Kumar, S., Miklos, B., and Kurzweil, R. (2017). Efficient natural language response suggestion for smart reply.
Hogan, A. (2020). SPARQL Query Language, pages 323–448. Springer International Publishing, Cham.
Ji, S., Pan, S., Cambria, E., Marttinen, P., and Yu, P. S. (2022). A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2):494–514.
Kalinowski, A. and An, Y. (2022). Repurposing knowledge graph embeddings for triple representation via weak supervision. In 2022 International Conference on Intelligent Data Science Technologies and Applications (IDSTA), pages 129–137.
Lapalme, G. (2020). RDFjsRealB: A symbolic approach for generating text from RDF triples. In Castro Ferreira, T., Gardent, C., Ilinykh, N., van der Lee, C., Mille, S., Moussallem, D., and Shimorina, A., editors, Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 144–153, Dublin, Ireland (Virtual). Association for Computational Linguistics.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space.
Min, B., Ross, H., Sulem, E., Veyseh, A. P. B., Nguyen, T. H., Sainz, O., Agirre, E., Heintz, I., and Roth, D. (2023). Recent advances in natural language processing via large pre-trained language models: A survey. ACM Comput. Surv., 56(2).
Muennighoff, N., Tazi, N., Magne, L., and Reimers, N. (2023). MTEB: Massive text embedding benchmark. In Vlachos, A. and Augenstein, I., editors, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014–2037, Dubrovnik, Croatia. Association for Computational Linguistics.
Pahuja, V., Gu, Y., Chen, W., Bahrami, M., Liu, L., Chen, W.-P., and Su, Y. (2021). A systematic investigation of KB-text embedding alignment at scale. In Zong, C., Xia, F., Li, W., and Navigli, R., editors, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1764–1774, Online. Association for Computational Linguistics.
Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., and Wu, X. (2024). Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, pages 1–20.
Patil, R., Boit, S., Gudivada, V., and Nandigam, J. (2023). A survey of text representation and embedding techniques in NLP. IEEE Access, 11:36120–36146.
Perković, G., Drobnjak, A., and Botički, I. (2024). Hallucinations in LLMs: Understanding and addressing challenges. In 2024 47th MIPRO ICT and Electronics Convention (MIPRO), pages 2084–2088.
Qader, W. A., Ameen, M. M., and Ahmed, B. I. (2019). An overview of bag of words; importance, implementation, applications, and challenges. In 2019 International Engineering Conference (IEC), pages 200–204.
Regino, A. G., Caus, R. O., Hochgreb, V., and dos Reis, J. C. (2023). From natural language texts to RDF triples: A novel approach to generating e-commerce knowledge graphs. In Coenen, F., Fred, A., Aveiro, D., Dietz, J., Bernardino, J., Masciari, E., and Filipe, J., editors, Knowledge Discovery, Knowledge Engineering and Knowledge Management, pages 149–174, Cham. Springer Nature Switzerland.
Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Inui, K., Jiang, J., Ng, V., and Wan, X., editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Wang, L., Yang, N., Huang, X., Jiao, B., Yang, L., Jiang, D., Majumder, R., and Wei, F. (2024). Text embeddings by weakly-supervised contrastive pre-training.
Wang, Q., Mao, Z., Wang, B., and Guo, L. (2017). Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743.
Xia, P., Wu, S., and Van Durme, B. (2020). Which *BERT? A survey organizing contextualized encoders. In Webber, B., Cohn, T., He, Y., and Liu, Y., editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7516–7533, Online. Association for Computational Linguistics.
Yan, Q., Fan, J., Li, M., Qu, G., and Xiao, Y. (2022). A survey on knowledge graph embedding. In 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC), pages 576–583.
Yang, Y., Cer, D., Ahmad, A., Guo, M., Law, J., Constant, N., Hernandez Abrego, G., Yuan, S., Tar, C., Sung, Y.-h., Strope, B., and Kurzweil, R. (2020). Multilingual universal sentence encoder for semantic retrieval. In Celikyilmaz, A. and Wen, T.-H., editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–94, Online. Association for Computational Linguistics.
Zhu, H., Peng, H., Lyu, Z., Hou, L., Li, J., and Xiao, J. (2023). Pre-training language model incorporating domain-specific heterogeneous knowledge into a unified representation. Expert Systems with Applications, 215:119369.
Zhu, Y., Wan, J., Zhou, Z., Chen, L., Qiu, L., Zhang, W., Jiang, X., and Yu, Y. (2019). Triple-to-text: Converting RDF triples into high-quality natural languages via optimizing an inverse KL divergence. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19, pages 455–464, New York, NY, USA. Association for Computing Machinery.