Summary Evaluation Across Source Types and Genres, pages 22–31.
Liu, X. P. P. (2016). Sequence-to-sequence with attention
model for text summarization (textsum).
Lloret, E. and Palomar, M. (2012). Text summarisation in
progress: a literature review. Artificial Intelligence
Review, 37(1):1–41.
Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159–165.
Martins, A. F. and Smith, N. A. (2009). Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 1–9.
Mehta, P. (2016). From extractive to abstractive summa-
rization: a journey. In Proceedings of the ACL 2016
Student Research Workshop, pages 100–106.
Miao, Y. and Blunsom, P. (2016). Language as a latent vari-
able: Discrete generative models for sentence com-
pression. arXiv preprint arXiv:1609.07317.
Móro, R. et al. (2012). Personalized text summarization based on important terms identification. In 23rd International Workshop on Database and Expert Systems Applications (DEXA), pages 131–135.
Mozer, M., Jordan, M. I., and Petsche, T., editors (1997).
Advances in Neural Information Processing Systems
9, NIPS, Denver, CO, USA, December 2-5, 1996. MIT
Press.
Nallapati, R., Zhai, F., and Zhou, B. (2017). Summarunner:
A recurrent neural network based sequence model for
extractive summarization of documents. In AAAI,
pages 3075–3081.
Nallapati, R., Zhou, B., Gulcehre, C., Xiang, B.,
et al. (2016). Abstractive text summarization us-
ing sequence-to-sequence rnns and beyond. arXiv
preprint arXiv:1602.06023.
Narayan, S., Papasarantopoulos, N., Cohen, S. B., and Lap-
ata, M. (2017). Neural extractive summarization with
side information. arXiv preprint arXiv:1704.04530.
Nema, P., Khapra, M., Laha, A., and Ravindran, B.
(2017). Diversity driven attention model for query-
based abstractive summarization. arXiv preprint
arXiv:1704.08300.
Nenkova, A., McKeown, K., et al. (2011). Automatic summarization. Foundations and Trends® in Information Retrieval, 5(2–3):103–233.
Nenkova, A., Passonneau, R., and McKeown, K. (2007).
The pyramid method: Incorporating human content
selection variation in summarization evaluation. ACM
Transactions on Speech and Language Processing
(TSLP), 4(2):4.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the ACL, pages 311–318.
Paris, C. (2015). User modelling in text generation.
Bloomsbury Publishing.
Parveen, D., Ramsl, H.-M., and Strube, M. (2015). Topical coherence for graph-based extractive summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1949–1954.
Rath, G., Resnick, A., and Savage, T. (1961). The formation of abstracts by the selection of sentences. Part I: Sentence selection by men and machines. Journal of the Association for Information Science and Technology, 12(2):139–141.
Rush, A. M., Chopra, S., and Weston, J. (2015a). A neural
attention model for abstractive sentence summariza-
tion. arXiv preprint arXiv:1509.00685.
Rush, A. M., Chopra, S., and Weston, J. (2015b). Neural
attention model for abstractive summarization. https:
//github.com/facebookarchive/NAMAS.
Saggion, H. and Poibeau, T. (2013). Automatic text sum-
marization: Past, present and future. In Multi-source,
multilingual information extraction and summariza-
tion, pages 3–21. Springer.
See, A., Liu, P. J., and Manning, C. D. (2017). Get to
the point: Summarization with pointer-generator net-
works. arXiv preprint arXiv:1704.04368.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Se-
quence to sequence learning with neural networks. In
Advances in neural information processing systems,
pages 3104–3112.
Tu, Z., Lu, Z., Liu, Y., Liu, X., and Li, H. (2016). Mod-
eling coverage for neural machine translation. arXiv
preprint arXiv:1601.04811.
Vinyals, O., Fortunato, M., and Jaitly, N. (2015). Pointer
networks. In Advances in Neural Information Pro-
cessing Systems, pages 2692–2700.
Wan, X. (2010). Towards a unified approach to simultaneous single-document and multi-document summarizations. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1137–1145.
Wang, L., Raghavan, H., Castelli, V., Florian, R., and
Cardie, C. (2016). A sentence compression based
framework to query-focused multi-document summa-
rization. arXiv preprint arXiv:1606.07548.
Woodsend, K. and Lapata, M. (2012). Multiple aspect summarization using integer linear programming. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 233–243.
Yan, R., Nie, J.-Y., and Li, X. (2011). Summarize what you are interested in: An optimization framework for interactive personalized summarization. In Proceedings of EMNLP, pages 1342–1351.
Yasunaga, M., Zhang, R., Meelu, K., Pareek, A., Srini-
vasan, K., and Radev, D. (2017). Graph-based neu-
ral multi-document summarization. arXiv preprint
arXiv:1706.06681.
Yousefi-Azar, M. and Hamey, L. (2017). Text summariza-
tion using unsupervised deep learning. Expert Systems
with Applications, 68:93–105.
Automatic Text Summarization: A State-of-the-Art Review