2020) can be applied to this study. Moreover, uncertainty analysis can be applied to the large-scale intent detection task, since many categories are difficult to distinguish from each other.
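As a minimal sketch of the kind of uncertainty analysis suggested here (an illustration, not part of this work), the snippet below estimates per-utterance predictive entropy with Monte Carlo dropout over a Keras intent classifier; the classifier object, the number of stochastic passes, and the choice of MC dropout itself are all assumptions.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_entropy(model: tf.keras.Model, x, n_samples: int = 30):
    """Predictive entropy from stochastic forward passes (MC dropout).

    Keeping dropout active at inference time (training=True) yields a
    distribution over softmax outputs; high entropy flags utterances
    whose intent category the model cannot assign confidently.
    """
    # (n_samples, batch, n_intents): one softmax output per stochastic pass
    probs = np.stack([model(x, training=True).numpy()
                      for _ in range(n_samples)])
    mean_probs = probs.mean(axis=0)
    # Shannon entropy of the averaged predictive distribution, per utterance
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
```

Utterances whose entropy exceeds a chosen threshold could then be routed to a fallback intent or flagged for manual annotation rather than classified outright.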
REFERENCES
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
Ayata, D., Saraçlar, M., and Özgür, A. (2017). Turkish tweet sentiment analysis with word embedding and machine learning. In 2017 25th Signal Processing and Communications Applications Conference (SIU), pages 1–4. IEEE.
Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.
Chollet, F. et al. (2015). Keras. https://keras.io.
Clark, K., Luong, M.-T., Le, Q. V., and Manning, C. D. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. In ICLR.
Dündar, E. B. and Alpaydın, E. (2019). Learning word representations with deep neural networks for Turkish. In 2019 27th Signal Processing and Communications Applications Conference (SIU), pages 1–4. IEEE.
Dündar, E. B., Çekiç, T., Deniz, O., and Arslan, S. (2018). A hybrid approach to question-answering for a banking chatbot on Turkish: Extending keywords with embedding vectors. In KDIR, pages 169–175.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Liu, B. and Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Sak, H., Güngör, T., and Saraçlar, M. (2008). Turkish language resources: Morphological parser, morphological disambiguator and web corpus. In International Conference on Natural Language Processing, pages 417–427. Springer.
Schweter, S. (2020). BERTurk - BERT models for Turkish.
Sen, M. U. and Erdogan, H. (2014). Learning word representations for Turkish. In 2014 22nd Signal Processing and Communications Applications Conference (SIU), pages 1742–1745. IEEE.
Shridhar, K., Dash, A., Sahu, A., Pihlgren, G. G., Alonso, P., Pondenkandath, V., Kovács, G., Simistira, F., and Liwicki, M. (2019). Subword semantic hashing for intent classification on small datasets. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–6. IEEE.
Sogancioglu, G., Köroğlu, B. A., and Agin, O. Multi-label topic classification of Turkish sentences using cascaded approach for dialog management system.
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. (2019). HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5754–5764.