Arras, L., Horn, F., Montavon, G., Müller, K.-R., and Samek, W. (2017). “What is relevant in a text document?”: An interpretable machine learning approach. PLoS ONE, 12(8).
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7).
Bird, S., Klein, E., and Loper, E. (2009). Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.
Choi, K., Fazekas, G., and Sandler, M. (2016). Explaining deep convolutional neural networks on music classification. arXiv preprint arXiv:1607.02444.
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537.
Du, M., Liu, N., and Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1):68–77.
Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web, 2.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735–1780.
Kalchbrenner, N., Grefenstette, E., and Blunsom, P. (2014). A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188.
Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
Le, H. T., Cerisara, C., and Denis, A. (2018). Do convolutional networks need to be deep for text classification? In Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence.
Li, J., Chen, X., Hovy, E., and Jurafsky, D. (2015). Visualizing and understanding neural models in NLP. arXiv preprint arXiv:1506.01066.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. (2011). Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 142–150. Association for Computational Linguistics.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
Montavon, G., Samek, W., and Müller, K.-R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1–15.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Qin, Z., Yu, F., Liu, C., and Chen, X. (2018). How convolutional neural networks see the world: A survey of convolutional neural network visualization methods. arXiv preprint arXiv:1804.11191.
Rajwadi, M., Glackin, C., Wall, J., Chollet, G., and Cannings, N. (2019). Explaining sentiment classification. Interspeech 2019, pages 56–60.
Řehůřek, R. and Sojka, P. (2010). Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. Citeseer.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Wang, X., and Tang, X. (2017). Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164.
Wood-Doughty, Z., Andrews, N., and Dredze, M. (2018). Convolutions are all you need (for classifying character sequences). In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 208–213.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057.
Yin, W., Kann, K., Yu, M., and Schütze, H. (2017). Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923.
Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., and Lipson, H. (2015). Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579.
Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer.