REFERENCES
Abeyrathna, K. D., Granmo, O.-C., Zhang, X., Jiao, L., and Goodwin, M. (2019). The regression Tsetlin machine: A novel approach to interpretable nonlinear regression. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 378.
Agirre, E. and Edmonds, P. (2007). Word sense disambiguation: Algorithms and applications. Springer, Dordrecht.
Berge, G. T., Granmo, O., Tveit, T. O., Goodwin, M., Jiao, L., and Matheussen, B. V. (2019). Using the Tsetlin machine to learn human-interpretable rules for high-accuracy text categorization with medical applications. IEEE Access, 7:115134–115146.
Bhattarai, B., Granmo, O.-C., and Jiao, L. (2020). Measuring the novelty of natural language text using the conjunctive clauses of a Tsetlin machine text classifier. arXiv preprint arXiv:2011.08755.
Buhrmester, V., Münch, D., and Arens, M. (2019). Analysis of explainers of black box deep neural networks for computer vision: A survey.
de Lacalle, O. L. and Agirre, E. (2015). A methodology for word sense disambiguation at 90% based on large-scale crowdsourcing. In SEM@NAACL-HLT.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dongsuk, O., Kwon, S., Kim, K., and Ko, Y. (2018). Word sense disambiguation based on word similarity calculation using word vector representation from a knowledge-based graph. In COLING.
Granmo, O.-C. (2018). The Tsetlin machine - a game theoretic bandit driven approach to optimal pattern recognition with propositional logic.
Granmo, O.-C., Glimsdal, S., Jiao, L., Goodwin, M., Omlin, C. W., and Berge, G. T. (2019). The convolutional Tsetlin machine.
Hadiwinoto, C., Ng, H. T., and Gan, W. C. (2019). Improved word sense disambiguation using pre-trained contextualized word representations.
Kågebäck, M. and Salomonsson, H. (2016). Word sense disambiguation using a bidirectional LSTM. In CogALex@COLING.
Khattak, F. K., Jeblee, S., Pou-Prom, C., Abdalla, M., Meaney, C., and Rudzicz, F. (2019). A survey of word embeddings for clinical text. Journal of Biomedical Informatics: X, 4:100057.
Lazreg, M. B., Goodwin, M., and Granmo, O.-C. (2020). Combining a context aware neural network with a denoising autoencoder for measuring string similarities. Computer Speech & Language, 60:101028.
Liao, K., Ye, D., and Xi, Y. (2010). Research on enterprise text knowledge classification based on knowledge schema. In 2010 2nd IEEE International Conference on Information Management and Engineering, pages 452–456.
Loureiro, D., Rezaee, K., Pilehvar, M. T., and Camacho-Collados, J. (2020). Language models and word sense disambiguation: An overview and analysis.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. arXiv preprint arXiv:1310.4546.
Miller, G. A., Chodorow, M., Landes, S., Leacock, C., and Thomas, R. G. (1994). Using a semantic concordance for sense identification. In HLT.
Navigli, R., Camacho-Collados, J., and Raganato, A. (2017). Word sense disambiguation: A unified evaluation framework and empirical comparison. In EACL.
Navigli, R. and Velardi, P. (2004). Structural semantic interconnection: A knowledge-based approach to word sense disambiguation. In SENSEVAL@ACL.
Rezaeinia, S. M., Rahmani, R., Ghodsi, A., and Veisi, H. (2019). Sentiment analysis based on improved pre-trained word embeddings. Expert Systems with Applications, 117:139–147.
Rudin, C. (2018). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.
Sadi, M. F., Ansari, E., and Afsharchi, M. (2019). Supervised word sense disambiguation using new features based on word embeddings. Journal of Intelligent & Fuzzy Systems, 37:1467–1476.
Saha, R., Granmo, O.-C., and Goodwin, M. (2020). Mining interpretable rules for sentiment and semantic relation analysis using Tsetlin machines. In Artificial Intelligence XXXVII, pages 67–78. Springer International Publishing.
Sonkar, S., Waters, A. E., and Baraniuk, R. G. (2020). Attention word embedding. arXiv preprint arXiv:2006.00988.
Taghipour, K. and Ng, H. T. (2015). One million sense-tagged instances for word sense disambiguation and induction. In CoNLL.
Tripodi, R. and Pelillo, M. (2017). A game-theoretic approach to word sense disambiguation. Computational Linguistics, 43(1):31–70.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
Wang, Y., Wang, L., Rastegar-Mojarad, M., Moon, S., Shen, F., Afzal, N., Liu, S., Zeng, Y., Mehrabi, S., Sohn, S., and Liu, H. (2018). Clinical information extraction applications: A literature review. Journal of Biomedical Informatics, 77:34–49.
Yadav, R. K., Jiao, L., Granmo, O.-C., and Goodwin, M. (2021). Human-level interpretable learning for aspect-based sentiment analysis. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21). AAAI.
Yuan, D., Richardson, J., Doherty, R., Evans, C., and Altendorf, E. (2016). Semi-supervised word sense disambiguation with neural models. In COLING.
Zhang, X., Jiao, L., Granmo, O.-C., and Goodwin, M. (2020). On the convergence of Tsetlin machines for the identity- and not operators. arXiv preprint arXiv:2007.14268.
Zhong, Z. and Ng, H. T. (2010). It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL.