decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Dai, A. M. and Le, Q. V. (2015). Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087.
Dieng, A. B., Wang, C., Gao, J., and Paisley, J. W. (2016). TopicRNN: A recurrent neural network with long-range semantic dependency. CoRR, abs/1611.01702.
Firth, J. R. (1957). A synopsis of linguistic theory, 1930-
1955. Studies in linguistic analysis.
Harris, A. and Jones, S. H. (2016). Words. In Writing for
Performance, pages 19–35. Springer.
Hermann, K. M., Kočiský, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., and Blunsom, P. (2015). Teaching machines to read and comprehend. CoRR, abs/1506.03340.
Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., and Xing, E. P. (2017). Toward controlled generation of text. In International Conference on Machine Learning, pages 1587–1596.
Johnson, R. and Zhang, T. (2016). Supervised and semi-supervised text categorization using LSTM for region embeddings. arXiv preprint arXiv:1602.02373.
Kim, Y. (2014). Convolutional neural networks for sentence
classification. arXiv preprint arXiv:1408.5882.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Larsson, M. and Nilsson, A. (2017). Manifold traversal for
reversing the sentiment of text.
Li, J., Jia, R., He, H., and Liang, P. (2018). Delete, retrieve,
generate: A simple approach to sentiment and style
transfer. CoRR, abs/1804.06437.
Liu, B. (2012). Sentiment analysis and opinion mining.
Synthesis lectures on human language technologies,
5(1):1–167.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. (2011). Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
McCann, B., Bradbury, J., Xiong, C., and Socher, R. (2017).
Learned in translation: Contextualized word vectors.
CoRR, abs/1708.00107.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
Miyato, T., Dai, A. M., and Goodfellow, I. (2016). Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Mohammad, S. M. and Turney, P. D. (2013). Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436–465.
Novikova, J., Dušek, O., Curry, A. C., and Rieser, V. (2017). Why we need new evaluation metrics for NLG. arXiv preprint arXiv:1707.06875.
Pang, B. and Lee, L. (2005). Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 115–124. Association for Computational Linguistics.
Pennington, J., Socher, R., and Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Radford, A., Jozefowicz, R., and Sutskever, I. (2017). Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Tang, D., Qin, B., and Liu, T. (2015). Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432.
Wang, Y., Huang, M., Zhao, L., et al. (2016). Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615.
Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. (2016). Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489.
Yin, W., Kann, K., Yu, M., and Schütze, H. (2017). Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923.