
una legge di distribuzione. Giorn. Dell'Inst. Ital. Degli
Att., 4:89–91.
Lee, B. W., Jang, Y. S., and Lee, J. (2021). Pushing on
text readability assessment: A transformer meets hand-
crafted linguistic features. In Moens, M.-F., Huang, X.,
Specia, L., and Yih, S. W.-t., editors, Proceedings of
the 2021 Conference on Empirical Methods in Natu-
ral Language Processing, pages 10669–10686, Online
and Punta Cana, Dominican Republic. Association for
Computational Linguistics.
Lee, B. W. and Lee, J. (2023). LFTK: Handcrafted features
in computational linguistics. In Kochmar, E., Burstein,
J., Horbach, A., Laarmann-Quante, R., Madnani, N.,
Tack, A., Yaneva, V., Yuan, Z., and Zesch, T., editors,
Proceedings of the 18th Workshop on Innovative Use
of NLP for Building Educational Applications (BEA
2023), pages 1–19, Toronto, Canada. Association for
Computational Linguistics. Version: 1.0.9.
Lhoest, Q., Villanova del Moral, A., Jernite, Y., Thakur, A.,
von Platen, P., Patil, S., Chaumond, J., Drame, M., Plu,
J., Tunstall, L., Davison, J., Šaško, M., Chhablani, G.,
Malik, B., Brandeis, S., Le Scao, T., Sanh, V., Xu, C.,
Patry, N., McMillan-Major, A., Schmid, P., Gugger,
S., Delangue, C., Matussière, T., Debut, L., Bekman,
S., Cistac, P., Goehringer, T., Mustar, V., Lagunas, F.,
Rush, A., and Wolf, T. (2021). Datasets: A community
library for natural language processing. In Proceedings
of the 2021 Conference on Empirical Methods in Nat-
ural Language Processing: System Demonstrations,
pages 175–184, Online and Punta Cana, Dominican
Republic. Association for Computational Linguistics.
Lugea, J. and Walker, B. (2023). Stylistics: Text, Cognition
and Corpora. Palgrave Macmillan, Cham.
Lyu, Y., Liang, P. P., Pham, H., Hovy, E., Póczos,
B., Salakhutdinov, R., and Morency, L.-P. (2021).
StylePTB: A compositional benchmark for fine-
grained controllable text style transfer. In Toutanova,
K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D.,
Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T.,
and Zhou, Y., editors, Proceedings of the 2021 Confer-
ence of the North American Chapter of the Association
for Computational Linguistics: Human Language Tech-
nologies, pages 2116–2138, Online. Association for
Computational Linguistics.
Lyu, Y., Luo, T., Shi, J., Hollon, T., and Lee, H. (2023).
Fine-grained text style transfer with diffusion-based
language models. In Can, B., Mozes, M., Cahyawijaya,
S., Saphra, N., Kassner, N., Ravfogel, S., Ravichan-
der, A., Zhao, C., Augenstein, I., Rogers, A., Cho, K.,
Grefenstette, E., and Voita, L., editors, Proceedings
of the 8th Workshop on Representation Learning for
NLP (RepL4NLP 2023), pages 65–74, Toronto, Canada.
Association for Computational Linguistics.
McDonald, D. D. and Pustejovsky, J. (1985). A computa-
tional theory of prose style for natural language gener-
ation. In Second Conference of the European Chapter
of the Association for Computational Linguistics.
Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher,
R., Amatriain, X., and Gao, J. (2024). Large language
models: A survey.
paperswithcode.com (2024). Sentence completion on Hel-
laSwag. Accessed on 2024-05-29.
The Pallets Projects (2024). Jinja documentation (3.1.x).
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.,
et al. (2018). Improving language understanding by
generative pre-training. Accessed on 2024-05-29.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and
Sutskever, I. (2019). Language models are unsupervised
multitask learners. Accessed on 2024-05-29.
See, A., Liu, P. J., and Manning, C. D. (2017). Get to the
point: Summarization with pointer-generator networks.
In Proceedings of the 55th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1:
Long Papers), pages 1073–1083, Vancouver, Canada.
Association for Computational Linguistics.
Smirnov, N. V. (1939). On the estimation of the discrepancy
between empirical curves of distribution for two inde-
pendent samples. Bull. Math. Univ. Moscou, 2(2):3–14.
Sterne, L., New, J., New, M., and Ricks, C. (2003). The Life
and Opinions of Tristram Shandy, Gentleman. Penguin
classics. Penguin Books Limited.
Toshevska, M. and Gievska, S. (2022). A review of text style
transfer using deep learning. IEEE Transactions on
Artificial Intelligence, 3(5):669–684.
Verma, G. and Srinivasan, B. V. (2019). A lexical, syntactic,
and semantic perspective for understanding style in
text. arXiv preprint arXiv:1909.08349.
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M.,
Reddy, T., Cournapeau, D., Burovski, E., Peterson, P.,
Weckesser, W., Bright, J., van der Walt, S. J., Brett, M.,
Wilson, J., Millman, K. J., Mayorov, N., Nelson, A.
R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Po-
lat,
˙
I., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde,
D., Perktold, J., Cimrman, R., Henriksen, I., Quintero,
E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H.,
Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Con-
tributors (2020). SciPy 1.0: Fundamental Algorithms
for Scientific Computing in Python. Nature Methods,
17:261–272. Version: 1.13.1.
Wilde, O. (1909). Poems: With the Ballad of Reading Gaol.
Methuen & Company.
Yang, H., Zhang, Y., Xu, J., Lu, H., Heng, P.-A., and Lam,
W. (2024). Unveiling the generalization power of fine-
tuned large language models. In Duh, K., Gomez, H.,
and Bethard, S., editors, Proceedings of the 2024 Con-
ference of the North American Chapter of the Associa-
tion for Computational Linguistics: Human Language
Technologies (Volume 1: Long Papers), pages 884–899,
Mexico City, Mexico. Association for Computational
Linguistics.
Zhang, H., Song, H., Li, S., Zhou, M., and Song, D.
(2023). A survey of controllable text generation using
transformer-based pre-trained language models. ACM
Comput. Surv., 56(3).
Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-
level convolutional networks for text classification. In
Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and
Garnett, R., editors, Advances in Neural Information
Processing Systems, volume 28. Curran Associates,
Inc.
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y.,
Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang,
C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang,
X., Liu, Z., Liu, P., Nie, J.-Y., and Wen, J.-R. (2023).
A survey of large language models.
LLM Output Compliance with Handcrafted Linguistic Features: An Experiment