Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Cireşan, D., Meier, U., and Schmidhuber, J. (2012). Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745.
Deng, L. and Liu, Y. (2018). Deep Learning in Natural Language Processing. Springer.
Gelderman, C. J., Ghijsen, P. W. T., and Brugman, M. J. (2006). Public procurement and EU tendering directives – explaining non-compliance. International Journal of Public Sector Management, 19(7):702–714.
Graves, A., Fernández, S., and Schmidhuber, J. (2005). Bidirectional LSTM networks for improved phoneme classification and recognition. In International Conference on Artificial Neural Networks, pages 799–804. Springer.
Hardeniya, N., Perkins, J., Chopra, D., Joshi, N., and Mathur, I. (2016). Natural Language Processing: Python and NLTK. Packt Publishing Ltd.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.
Hoskins, J. C., Kaliyur, K. M., and Himmelblau, D. M. (1990). Incipient fault detection and diagnosis using artificial neural networks. In IJCNN 1990, International Joint Conference on Neural Networks, San Diego, CA, USA, June 17-21, 1990, pages 81–86.
Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines, volume 668. Springer Science & Business Media.
Le, Q. and Mikolov, T. (2014). Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196.
Le, Q. V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado, G. S., Dean, J., and Ng, A. Y. (2011). Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209.
Liang, D., Altosaar, J., Charlin, L., and Blei, D. M. (2016). Factorization meets the item embedding: Regularizing matrix factorization with item co-occurrence. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 59–66. ACM.
Manning, C., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S., and McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
Pak, I. and Teh, P. L. (2018). Text segmentation techniques: a critical review. In Innovative Computing, Optimization and Its Applications, pages 167–181. Springer.
Pennington, J., Socher, R., and Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Poliak, A., Rastogi, P., Martin, M. P., and Van Durme, B. (2017). Efficient, compositional, order-sensitive n-gram embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 503–508.
Sassano, M. (2003). Virtual examples for text classification with support vector machines. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 208–215. Association for Computational Linguistics.
Skovajsová, L. (2017). Long short-term memory description and its application in text processing. In 2017 Communication and Information Technologies (KIT), pages 1–4. IEEE.
Sundermeyer, M., Schlüter, R., and Ney, H. (2012). LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association, pages 194–197.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Towell, G. G. and Shavlik, J. W. (1993). Extracting refined rules from knowledge-based neural networks. Machine Learning, 13(1):71–101.
Turian, J., Ratinov, L., and Bengio, Y. (2010). Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics.
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. J. (1995). Phoneme recognition using time-delay neural networks. Backpropagation: Theory, Architectures and Applications, pages 35–61.
Wang, J., Yu, L.-C., Lai, K. R., and Zhang, X. (2016). Dimensional sentiment analysis using a regional CNN-LSTM model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 225–230.
Wang, J.-H., Liu, T.-W., Luo, X., and Wang, L. (2018). An LSTM approach to short text sentiment classification with word embeddings. In Proceedings of the 30th Conference on Computational Linguistics and Speech Processing (ROCLING 2018), pages 214–223.
Zhang, K., Xu, J., Min, M. R., Jiang, G., Pelechrinis, K., and Zhang, H. (2016). Automated IT system failure prediction: A deep learning approach. In 2016 IEEE International Conference on Big Data (Big Data), pages 1291–1300. IEEE.
Zhang, Y. and Lu, X. (2018). A speech recognition acoustic model based on LSTM-CTC. In 2018 IEEE 18th International Conference on Communication Technology (ICCT), pages 1052–1055. IEEE.
A Mixed Neural Network and Support Vector Machine Model for Tender Creation in the European Union TED Database