REFERENCES
Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022.
Celano, G. G., Richter, M., Voll, R., and Heyer, G. (2018). Aspect coding asymmetries of verbs: The case of Russian.
Cohen, J. (2008). Trusses: Cohesive subgraphs for social network analysis. National Security Agency Technical Report, 16:3–1.
Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, pages 1–8. Association for Computational Linguistics.
Horch, E. and Reich, I. (2016). On “article omission” in German and the “uniform information density hypothesis”. Bochumer Linguistische Arbeitsberichte, page 125.
Hulth, A. (2003). Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216–223. Association for Computational Linguistics.
Huo, H. and Liu, X. H. (2014). Automatic summarization based on mutual information. In Applied Mechanics and Materials, volume 513, pages 1994–1997. Trans Tech Publications.
Jaeger, T. F. (2010). Redundancy and reduction: Speakers manage syntactic information density. Cognitive Psychology, 61(1):23–62.
Jaeger, T. F. and Levy, R. P. (2007). Speakers optimize information density through syntactic reduction. In Advances in Neural Information Processing Systems, pages 849–856.
Kaimal, R. et al. (2012). Document summarization using positive pointwise mutual information. arXiv preprint arXiv:1205.1638.
Krifka, M. (2008). Basic notions of information structure.
Acta Linguistica Hungarica, 55(3-4):243–276.
Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177.
Liu, R. and Nyberg, E. (2013). A phased ranking model for question answering. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 79–88. ACM.
Marujo, L., Bugalho, M., Neto, J. P. d. S., Gershman, A.,
and Carbonell, J. (2013). Hourly traffic prediction of
news stories. arXiv preprint arXiv:1306.4608.
Mihalcea, R. and Tarau, P. (2004). TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411.
Özgür, A., Özgür, L., and Güngör, T. (2005). Text categorization with class-based and corpus-based keyword selection. In International Symposium on Computer and Information Sciences, pages 606–615. Springer.
Pal, A. R., Maiti, P. K., and Saha, D. (2013). An approach to automatic text summarization using simplified Lesk algorithm and WordNet. International Journal of Control Theory and Computer Modeling (IJCTCM), 3(4/5):15–23.
Piantadosi, S. T., Tily, H., and Gibson, E. (2011). Word
lengths are optimized for efficient communication.
Proceedings of the National Academy of Sciences,
108(9):3526–3529.
Richter, M., Kyogoku, Y., and Kölbl, M. (2019a). Estimation of average information content: Comparison of impact of contexts. In Proceedings of SAI Intelligent Systems Conference, pages 1251–1257. Springer.
Richter, M., Kyogoku, Y., and Kölbl, M. (2019b). Interaction of information content and frequency as predictors of verbs’ lengths. In International Conference on Business Information Systems, pages 271–282. Springer.
Rietdorf, C., Kölbl, M., Kyogoku, Y., and Richter, M. (2019). Summarisation by information maps: A pilot study.
Rooth, M. (1985). Association with Focus. PhD thesis, University of Massachusetts, Amherst.
Rooth, M. (1992). A theory of focus interpretation. Natural Language Semantics, 1(1):75–116.
Salton, G. and Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379–423.
Sparck Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21.
Tixier, A., Malliaros, F., and Vazirgiannis, M. (2016). A graph degeneracy-based approach to keyword extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1860–1870.
Witten, I. H., Paynter, G. W., Frank, E., Gutwin, C., and Nevill-Manning, C. G. (2005). Kea: Practical automated keyphrase extraction. In Design and Usability of Digital Libraries: Case Studies in the Asia Pacific, pages 129–152. IGI Global.
Yang, Z. and Nyberg, E. (2015). Leveraging procedural
knowledge for task-oriented search. In Proceedings
of the 38th International ACM SIGIR Conference on
Research and Development in Information Retrieval,
pages 513–522. ACM.
Zhang, Q., Wang, Y., Gong, Y., and Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 836–845.