Kingma, D. and Ba, J. (2014). Adam: A method
for stochastic optimization. arXiv preprint
arXiv:1412.6980.
Labutov, I. and Lipson, H. (2013). Re-embedding words. In
ACL (2), pages 489–493.
Le, Q. V. and Mikolov, T. (2014). Distributed represen-
tations of sentences and documents. In ICML, vol-
ume 14, pages 1188–1196.
Liu, Y., Liu, Z., Chua, T.-S., and Sun, M. (2015). Topical
word embeddings. In AAAI, pages 2418–2424.
Martinez-Camara, E., Martin-Valdivia, M. T., Urena-Lopez, L. A., and Montejo-Raez, A. (2014). Sentiment
analysis in Twitter. Natural Language Engineering,
20(1):1–28.
Mejova, Y., Weber, I., and Macy, M. W. (2015). Twitter: A
Digital Socioscope. Cambridge University Press,
Cambridge, UK.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a).
Efficient estimation of word representations in vector
space. arXiv preprint arXiv:1301.3781.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and
Dean, J. (2013b). Distributed representations of words
and phrases and their compositionality. In Advances in
neural information processing systems, pages 3111–
3119.
Mohammad, S. M., Kiritchenko, S., and Zhu, X. (2013).
NRC-Canada: Building the state-of-the-art in sentiment
analysis of tweets. arXiv preprint arXiv:1308.6242.
Nakov, P., Ritter, A., Rosenthal, S., Sebastiani, F., and Stoy-
anov, V. (2016). Semeval-2016 task 4: Sentiment
analysis in twitter. Proceedings of SemEval, pages 1–
18.
Neelakantan, A., Shankar, J., Passos, A., and McCallum, A.
(2015). Efficient non-parametric estimation of mul-
tiple embeddings per word in vector space. arXiv
preprint arXiv:1504.06654.
Pennington, J., Socher, R., and Manning, C. D. (2014).
Glove: Global vectors for word representation. In
EMNLP, volume 14, pages 1532–1543.
Poria, S., Cambria, E., and Gelbukh, A. F. (2015). Deep
convolutional neural network textual features and
multiple kernel learning for utterance-level multi-
modal sentiment analysis. In EMNLP, pages 2539–
2544.
Qiu, L., Cao, Y., Nie, Z., Yu, Y., and Rui, Y. (2014).
Learning word representation considering proximity
and ambiguity. In Twenty-Eighth AAAI Conference on
Artificial Intelligence.
Reisinger, J. and Mooney, R. J. (2010). Multi-prototype
vector-space models of word meaning. In Human
Language Technologies: The 2010 Annual Confer-
ence of the North American Chapter of the Associ-
ation for Computational Linguistics, pages 109–117.
Association for Computational Linguistics.
Ren, Y., Wang, R., and Ji, D. (2016). A topic-enhanced
word embedding for twitter sentiment classification.
Information Sciences, 369:188–198.
Rosenthal, S., Nakov, P., Kiritchenko, S., Mohammad,
S. M., Ritter, A., and Stoyanov, V. (2015). Semeval-
2015 task 10: Sentiment analysis in twitter. In Pro-
ceedings of the 9th international workshop on seman-
tic evaluation (SemEval 2015), pages 451–463.
Rosenthal, S., Ritter, A., Nakov, P., and Stoyanov, V.
(2014). Semeval-2014 task 9: Sentiment analysis in
twitter. In Proceedings of the 8th international work-
shop on semantic evaluation (SemEval 2014), pages
73–80. Dublin, Ireland.
Sag, I. A., Baldwin, T., Bond, F., Copestake, A., and
Flickinger, D. (2002). Multiword expressions: A pain
in the neck for NLP. In International Conference on In-
telligent Text Processing and Computational Linguis-
tics, pages 1–15. Springer.
Severyn, A. and Moschitti, A. (2015a). Twitter sentiment
analysis with deep convolutional neural networks. In
Proceedings of the 38th International ACM SIGIR
Conference on Research and Development in Infor-
mation Retrieval, pages 959–962. ACM.
Severyn, A. and Moschitti, A. (2015b). Unitn: Training
deep convolutional neural network for twitter senti-
ment classification. In Proceedings of the 9th In-
ternational Workshop on Semantic Evaluation (Se-
mEval 2015), Association for Computational Linguis-
tics, Denver, Colorado, pages 464–469.
Skoric, M., Poor, N., Achananuparp, P., Lim, E. P., and
Jiang, J. (2012). Tweets and votes: A study of the
2011 Singapore general election. In Proceedings of
the 45th Hawaii International Conference on System
Sciences (HICSS), pages 2583–2591.
Smith, A. N., Fischer, E., and Yongjian, C. (2012).
How does brand-related user-generated content differ
across YouTube, Facebook, and Twitter? Journal of
Interactive Marketing, 26(2):102–113.
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning,
C. D., Ng, A. Y., Potts, C., et al. (2013). Recur-
sive deep models for semantic compositionality over a
sentiment treebank. In Proceedings of the Conference
on Empirical Methods in Natural Language Processing
(EMNLP), pages 1631–1642.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I.,
and Salakhutdinov, R. (2014). Dropout: A simple way
to prevent neural networks from overfitting. The Jour-
nal of Machine Learning Research, 15(1):1929–1958.
Tang, D., Qin, B., Feng, X., and Liu, T. (2015). Target-
dependent sentiment classification with long short
term memory. CoRR, abs/1512.01100.
Tang, D., Wei, F., Qin, B., Liu, T., and Zhou, M. (2014a).
Coooolll: A deep learning system for twitter senti-
ment classification. In Proceedings of the 8th Inter-
national Workshop on Semantic Evaluation (SemEval
2014), pages 208–212.
Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T., and Qin,
B. (2014b). Learning sentiment-specific word embed-
ding for twitter sentiment classification. In ACL (1),
pages 1555–1565.
Tian, F., Dai, H., Bian, J., Gao, B., Zhang, R., Chen, E., and
Liu, T.-Y. (2014). A probabilistic model for learning
multi-prototype word embeddings. In COLING, pages
151–160.