Computational Linguistics: Technical Papers, pages
3519–3530.
Dubuisson Duplessis, G., Charras, F., Letard, V., Ligozat, A.-L., and Rosset, S. (2017). Utterance Retrieval based on Recurrent Surface Text Patterns. In 39th European Conference on Information Retrieval, Aberdeen, United Kingdom.
Fu, P., Lin, Z., Yuan, F., Wang, W., and Meng, D.
(2018). Learning sentiment-specific word embedding
via global sentiment representation. In Thirty-Second
AAAI Conference on Artificial Intelligence.
Fulda, N., Etchart, T., Myers, W., Ricks, D., Brown, Z., Szendre, J., Murdoch, B., Carr, A., and Wingate, D. (2018). BYU-EVE: Mixed initiative dialog via structured knowledge graph traversal and conversational scaffolding. In Proceedings of the 2018 Amazon Alexa Prize.
Fulda, N., Ricks, D., Murdoch, B., and Wingate, D. (2017a). What can you do with a rock? Affordance extraction via word embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 1039–1045.
Fulda, N., Tibbetts, N., Brown, Z., and Wingate, D. (2017b). Harvesting common-sense navigational knowledge for robotics from uncurated text corpora. In Proceedings of the First Conference on Robot Learning (CoRL), forthcoming.
Gandhe, S. and Traum, D. (2013). Surface text based dialogue models for virtual humans. In Proceedings of the SIGDIAL 2013 Conference, pages 251–260.
Gladkova, A., Drozd, A., and Matsuoka, S. (2016). Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. In Proceedings of the NAACL Student Research Workshop, pages 8–15.
Kim, J.-K., de Marneffe, M.-C., and Fosler-Lussier, E.
(2016). Adjusting word embeddings with semantic
intensity orders. In Proceedings of the 1st Workshop
on Representation Learning for NLP, pages 62–69.
Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R. S., Torralba, A., Urtasun, R., and Fidler, S. (2015). Skip-thought vectors. CoRR, abs/1506.06726.
Li, Y., Su, H., Shen, X., Li, W., Cao, Z., and Niu, S. (2017). DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. arXiv e-prints, page arXiv:1710.03957.
Logeswaran, L. and Lee, H. (2018). An efficient framework
for learning sentence representations. In International
Conference on Learning Representations.
Lowe, R., Pow, N., Serban, I., and Pineau, J. (2015). The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. arXiv e-prints, page arXiv:1506.08909.
Lowe, R. T., Pow, N., Serban, I. V., Charlin, L., Liu, C.-W., and Pineau, J. (2017). Training end-to-end dialogue systems with the Ubuntu Dialogue Corpus. Dialogue & Discourse, 8(1):31–65.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a).
Efficient estimation of word representations in vector
space. CoRR, abs/1301.3781.
Mikolov, T., Yih, W.-t., and Zweig, G. (2013b). Linguistic regularities in continuous space word representations. In Proceedings of NAACL-HLT 2013. Association for Computational Linguistics.
Nio, L., Sakti, S., Neubig, G., Yoshino, K., and Nakamura, S. (2016). Neural network approaches to dialog response retrieval and generation. IEICE Transactions, 99-D(10):2508–2517.
Patro, B. N., Kurmi, V. K., Kumar, S., and Namboodiri, V. P. (2018). Learning semantic sentence embeddings using sequential pair-wise discriminator. arXiv preprint arXiv:1806.00807.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. Technical report, OpenAI.
Rainie, H. and Anderson, J. Q. (2017). The future of free speech, trolls, anonymity and fake news online. Pew Research Center.
Salton, G. and McGill, M. J. (1986). Introduction to Modern Information Retrieval. McGraw-Hill, Inc., New York, NY, USA.
Schneider, S. J., Kerwin, J., Frechtling, J., and Vivari, B. A.
(2002). Characteristics of the discussion in online and
face-to-face focus groups. Social science computer
review, 20(1):31–42.
Serban, I. V., Sordoni, A., Bengio, Y., Courville, A. C.,
and Pineau, J. (2016). Building end-to-end dialogue
systems using generative hierarchical neural network
models. In AAAI.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc.
Thalenberg, B. (2016). Distinguishing antonyms from syn-
onyms in vector space models of semantics. Technical
report.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.
Wu, Y., Wu, W., Xing, C., Zhou, M., and Li, Z. (2017). Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505.
Zhu, X., Li, T., and de Melo, G. (2018). Exploring semantic properties of sentence embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 632–637.
ICAART 2020 - 12th International Conference on Agents and Artificial Intelligence