
6 FUTURE WORK AND LIMITATIONS
Although relation classification has limited applications as a standalone module, it is a core component of the broader relation extraction task. We therefore expect that our approach can be adopted to improve scores in the various applications that build on relation classification. Further improvements could be achieved by incorporating lexical and syntactic text features produced by NLP tools, as sketched below.
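As an illustration of the kind of features such tools provide, the following minimal sketch uses spaCy to extract per-token lexical and syntactic information. The specific feature set, the en_core_web_sm model, and how the features would be fed to the classifier are illustrative assumptions, not part of the proposed approach.

```python
import spacy

# Minimal sketch: per-token lexical and syntactic features via spaCy.
# The feature selection here is an illustrative assumption.
nlp = spacy.load("en_core_web_sm")

def token_features(sentence: str) -> list[dict]:
    """Return lexical/syntactic features for each token in a sentence."""
    doc = nlp(sentence)
    return [
        {
            "text": tok.text,       # surface form (lexical)
            "lemma": tok.lemma_,    # lemma (lexical)
            "pos": tok.pos_,        # coarse part-of-speech tag (syntactic)
            "dep": tok.dep_,        # dependency relation label (syntactic)
            "head": tok.head.text,  # syntactic head of the token
        }
        for tok in doc
    ]

print(token_features("The company acquired the startup."))
```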
Additionally, a natural extension would have our model assign each predicted relation to its corresponding entity pair in the input, yielding a complete extraction pipeline. However, such a pipeline is not an ideal approach to relation or triple extraction, because errors in the entity-tagging step propagate into the relation classification step.
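For concreteness, a minimal sketch of this pipeline extension follows. Here tag_entities and classify_relation are hypothetical stand-ins for a trained entity tagger and a relation classifier, and the exhaustive pair enumeration and the "no_relation" label are assumptions for illustration.

```python
# Minimal sketch of the pipeline extension: tag entities first, then
# classify the relation for every ordered entity pair.

def extract_triples(sentence: str,
                    tag_entities,        # sentence -> list of entity spans (hypothetical)
                    classify_relation):  # (sentence, head, tail) -> label (hypothetical)
    triples = []
    entities = tag_entities(sentence)    # step 1: entity tagging
    for head in entities:
        for tail in entities:
            if head == tail:
                continue
            # Step 2: relation classification. If step 1 missed or
            # mis-segmented an entity, this call receives a wrong pair,
            # so the tagging error propagates into the extracted triple.
            relation = classify_relation(sentence, head, tail)
            if relation != "no_relation":  # illustrative negative label
                triples.append((head, relation, tail))
    return triples
```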
Finally, our adaptation for imbalanced predictions promises improvements when applied to similar tasks with imbalanced classes.
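Purely as an illustration of a loss-level treatment of class imbalance, the sketch below implements a generic multi-class soft Dice loss in PyTorch. It is one standard remedy, not necessarily the exact formulation of our adaptation, and the smoothing constant is an assumption.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, targets: torch.Tensor,
                   smooth: float = 1.0) -> torch.Tensor:
    """Multi-class soft Dice loss (illustrative sketch).

    logits:  (batch, num_classes) raw model outputs
    targets: (batch,) integer class labels
    """
    probs = F.softmax(logits, dim=-1)                     # (batch, C)
    one_hot = F.one_hot(targets, probs.size(-1)).float()  # (batch, C)
    intersection = (probs * one_hot).sum(dim=0)           # per-class overlap
    cardinality = probs.sum(dim=0) + one_hot.sum(dim=0)   # per-class mass
    dice = (2.0 * intersection + smooth) / (cardinality + smooth)
    return 1.0 - dice.mean()                              # average over classes
```

Because the Dice score is computed and averaged per class, rare classes contribute as much to the loss as frequent ones, which is why Dice-style objectives are a common choice for imbalanced classification.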
Our evaluation was limited by the small number of models that report relation classification performance. Nevertheless, the results support our model's superiority, as indicated by the gap between our F1 score and that of the closest model.