structure in students’ scientific explanations. Artificial
Intelligence in Education, pages 1–39.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer,
T. K., and Harshman, R. (1990). Indexing by latent
semantic analysis. Journal of the American Society
for Information Science, 41(6):391–407.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K.
(2019). BERT: Pre-training of deep bidirectional
transformers for language understanding. In The Con-
ference of the North American Chapter of the Associa-
tion for Computational Linguistics: Human Language
Technologies, pages 4171–4186.
Ding, N., Qin, Y., Yang, G., Wei, F., Yang, Z., Su, Y.,
Hu, S., Chen, Y., Chan, C.-M., Chen, W., et al.
(2023). Parameter-efficient fine-tuning of large-scale
pre-trained language models. Nature Machine Intelli-
gence, 5(3):220–235.
Doogan, C. and Buntine, W. (2021). Topic Model or Topic
Twaddle? Re-evaluating Semantic Interpretability
Measures. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3824–3848.
Egger, R. and Yu, J. (2022). A topic modelling compar-
ison between LDA, NMF, Top2Vec, and BERTopic
to demystify Twitter posts. Frontiers in Sociology,
7:886498.
Grootendorst, M. (2022). BERTopic: Neural topic mod-
eling with a class-based TF-IDF procedure. arXiv
preprint arXiv:2203.05794.
Harrando, I., Lisena, P., and Troncy, R. (2021). Apples to
apples: A systematic evaluation of topic models. In
The International Conference on Recent Advances in
Natural Language Processing, pages 483–493.
He, P., Liu, X., Gao, J., and Chen, W. (2020). DeBERTa:
Decoding-enhanced BERT with disentangled atten-
tion. arXiv preprint arXiv:2006.03654.
Hoyle, A., Goel, P., Hian-Cheong, A., Peskov, D., Boyd-
Graber, J., and Resnik, P. (2021). Is Automated Topic
Model Evaluation Broken?: The Incoherence of Co-
herence. Advances in Neural Information Processing Systems, 34:2018–2033.
Hujala, M., Knutas, A., Hynninen, T., and Arminen, H.
(2020). Improving the quality of teaching by utilis-
ing written student feedback: A streamlined process.
Computers & Education, 157:103965.
Lee, D. D. and Seung, H. S. (1999). Learning the parts of
objects by non-negative matrix factorization. Nature,
401(6755):788–791.
Malladi, S., Gao, T., Nichani, E., Damian, A., Lee, J. D.,
Chen, D., and Arora, S. (2023). Fine-tuning language
models with just forward passes. In Advances in Neu-
ral Information Processing Systems, volume 36, pages
53038–53075.
Masala, M., Ruseti, S., Dascalu, M., and Dobre, C. (2021).
Extracting and clustering main ideas from student
feedback using language models. In International
Conference on Artificial Intelligence in Education,
pages 282–292. Springer.
McInnes, L., Healy, J., Astels, S., et al. (2017). hdb-
scan: Hierarchical density based clustering. Journal
of Open Source Software, 2(11):205.
McInnes, L., Healy, J., and Melville, J. (2018).
UMAP: Uniform manifold approximation and pro-
jection for dimension reduction. arXiv preprint
arXiv:1802.03426.
Müller, M., Salathé, M., and Kummervold, P. E. (2023).
COVID-Twitter-BERT: A natural language process-
ing model to analyse COVID-19 content on Twitter.
Frontiers in Artificial Intelligence, 6:1023281.
Oliveira, G., Grenha Teixeira, J., Torres, A., and Morais,
C. (2021). An exploratory study on the emer-
gency remote education experience of higher educa-
tion students and teachers during the COVID-19 pan-
demic. British Journal of Educational Technology,
52(4):1357–1376.
O’Callaghan, D., Greene, D., Carthy, J., and Cunningham,
P. (2015). An analysis of the coherence of descriptors
in topic modeling. Expert Systems with Applications,
42(13):5645–5657.
Sharifian-Attar, V., De, S., Jabbari, S., Li, J., Moss, H.,
and Johnson, J. (2022). Analysing longitudinal social
science questionnaires: Topic modelling with BERT-
based embeddings. In 2022 IEEE International Con-
ference on Big Data (Big Data), pages 5558–5567.
Stevanović, A., Božić, R., and Radović, S. (2021). Higher
education students’ experiences and opinion about
distance learning during the Covid-19 pandemic.
Journal of Computer Assisted Learning, 37(6):1682–
1693.
Sun, J. and Yan, L. (2023). Using topic modeling to un-
derstand comments in student evaluations of teaching.
Discover Education, 2:1–12.
Sung, C., Dhamecha, T. I., and Mukhi, N. (2019). Improv-
ing short answer grading using transformer-based pre-
training. In International Conference on Artificial In-
telligence in Education, pages 469–481.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I.
(2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Waheeb, S. A., Khan, N. A., and Shang, X. (2022). Topic
modelling and sentiment analysis of online educa-
tion in the COVID-19 era using social networks-based
datasets. Electronics, 11(5):715.
Wang, T., Lu, K., Chow, K. P., and Zhu, Q. (2020).
COVID-19 sensing: negative sentiment analysis on
social media in china via BERT model. IEEE Access,
8:138162–138169.
Xu, W. W., Tshimula, J. M., Dubé, É., Graham, J. E.,
Greyson, D., MacDonald, N. E., and Meyer, S. B.
(2022). Unmasking the Twitter discourses on masks
during the COVID-19 pandemic: User cluster–based
BERT topic modeling approach. JMIR Infodemiology,
2(2):e41198.
Zhao, H., Phung, D., Huynh, V., Jin, Y., Du, L., and
Buntine, W. (2021). Topic modelling meets deep
neural networks: a survey. In The International Joint Conference on Artificial Intelligence 2021, pages 4713–4720.
Comparative Analysis of Topic Modelling Approaches on Student Feedback