Domeniconi, G., Moro, G., Pagliarani, A., and Pasolini, R.
(2015b). Markov chain based method for in-domain
and cross-domain sentiment classification. In Pro-
ceedings of the 7th International Joint Conference on
Knowledge Discovery, Knowledge Engineering and
Knowledge Management, pages 127–137. SciTePress.
Domeniconi, G., Moro, G., Pagliarani, A., and Pasolini,
R. (2017). Learning to predict the stock market Dow
Jones index detecting and mining relevant tweets. In
Proceedings of the 9th International Joint Confer-
ence on Knowledge Discovery, Knowledge Engineer-
ing and Knowledge Management.
Dos Santos, C. N. and Gatti, M. (2014). Deep convolutional
neural networks for sentiment analysis of short texts.
In COLING, pages 69–78.
Franco-Salvador, M., Cruz, F. L., Troyano, J. A., and Rosso,
P. (2015). Cross-domain polarity classification using
a knowledge-enhanced meta-classifier. Knowledge-
Based Systems, 86:46–56.
Frank, E., Hall, M., Holmes, G., Kirkby, R., Pfahringer,
B., Witten, I. H., and Trigg, L. (2005). Weka: A machine
learning workbench for data mining. Data Mining and
Knowledge Discovery Handbook, pages 1305–1314.
Glorot, X., Bordes, A., and Bengio, Y. (2011). Domain
adaptation for large-scale sentiment classification: A
deep learning approach. In Proceedings of the 28th In-
ternational Conference on Machine Learning (ICML-
11), pages 513–520.
He, Y., Lin, C., and Alani, H. (2011). Automatically ex-
tracting polarity-bearing topics for cross-domain sen-
timent classification. In Proceedings of the 49th An-
nual Meeting of the Association for Computational
Linguistics: Human Language Technologies-Volume
1, pages 123–131. Association for Computational Lin-
guistics.
Kumar, A., Irsoy, O., Su, J., Bradbury, J., English, R.,
Pierce, B., Ondruska, P., Gulrajani, I., and Socher,
R. (2015). Ask me anything: Dynamic memory
networks for natural language processing. CoRR,
abs/1506.07285.
Le, Q. V. and Mikolov, T. (2014). Distributed represen-
tations of sentences and documents. In ICML, vol-
ume 14, pages 1188–1196.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learn-
ing. Nature, 521(7553):436–444.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and
Dean, J. (2013). Distributed representations of words
and phrases and their compositionality. In Advances
in Neural Information Processing Systems, pages
3111–3119.
Pan, S. J., Ni, X., Sun, J.-T., Yang, Q., and Chen, Z.
(2010). Cross-domain sentiment classification via
spectral feature alignment. In Proceedings of the 19th
International Conference on World Wide Web (WWW
2010), pages 751–760. Association for Computing Ma-
chinery (ACM).
Pan, S. J. and Yang, Q. (2010). A survey on transfer learn-
ing. IEEE Transactions on Knowledge and Data En-
gineering, 22(10):1345–1359.
Rehurek, R. and Sojka, P. (2010). Software framework for
topic modelling with large corpora. In Proceedings of
the LREC 2010 Workshop on New Challenges for NLP
Frameworks. University of Malta.
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Man-
ning, C. D., Ng, A. Y., and Potts, C. (2013). Recur-
sive deep models for semantic compositionality over a
sentiment treebank. In Proceedings of the Conference
on Empirical Methods in Natural Language Processing
(EMNLP), pages 1631–1642.
Tang, D., Qin, B., and Liu, T. (2015). Document model-
ing with gated recurrent neural network for sentiment
classification. In Proceedings of the 2015 Conference
on Empirical Methods in Natural Language Process-
ing, pages 1422–1432. Association for Computational
Linguistics (ACL).
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and
Manzagol, P.-A. (2010). Stacked denoising autoen-
coders: Learning useful representations in a deep net-
work with a local denoising criterion. Journal of Ma-
chine Learning Research, 11(Dec):3371–3408.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J.
(1986). Learning representations by back-propagating
errors. Nature, 323(6088):533–536.
Zhang, X. and LeCun, Y. (2015). Text understanding from
scratch. arXiv preprint arXiv:1502.01710.
Zhang, Y., Hu, X., Li, P., Li, L., and Wu, X. (2015).
Cross-domain sentiment classification: feature diver-
gence, polarity divergence or both? Pattern Recogni-
tion Letters, 65:44–50.