ered plagiarism. Therefore, NEWRITER’s architecture could be extended to implement a Computational Creativity module. Finally, the tool should be tested with real users - scientists writing scientific text - who would answer a survey to assess the utility of the developed tool.
ACKNOWLEDGEMENTS
The present work was carried out with the support
of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Financing
Code 001. The authors also thank CNPq (Brazilian National Council for Scientific and Technological Development), FAPEMIG (Foundation for Research and Scientific and Technological Development of Minas Gerais), and PUC Minas for their partial support.