ACKNOWLEDGMENTS
This work is partly supported by JSPS KAK-
ENHI, Grant Numbers: JP18K18656, JP19KK0257,
JP20H04300, and JP20H01728.
REFERENCES
Anderson, T., Rourke, L., Garrison, R., and Archer, W.
(2019). Assessing teaching presence in a computer
conferencing context. Online Learning, 5(2).
Bachtiar, F., Kamei, K., and Cooper, E. (2011). An esti-
mation model of english abilities of students based on
their affective factors in learning by neural network.
In proceedings of IFSA and AFSS International Con-
ference 2011.
Baker, R. and Yacef, K. (2009). The state of educational
data mining in 2009: A review and future visions.
Journal of Educational Data Mining, 1:3–17.
Barker, T. (2011). An automated individual feedback and
marking system: An empirical study. 9th European
Conference on eLearning 2010, ECEL 2010, 9.
Biggam, J. (2010). Using automated assessment feedback
to enhance the quality of student learning in universi-
ties: A case study. In Technology Enhanced Learning.
Quality of Teaching and Educational Reform, pages
188–194, Berlin, Heidelberg. Springer Berlin Heidel-
berg.
Butcher, K. R. and Kintsch, W. (2001). Support of con-
tent and rhetorical processes of writing: Effects on the
writing process and the written product. Cognition
and Instruction, 19(3):277–322.
Carpenter, D., Geden, M., Rowe, J., Azevedo, R., and
Lester, J. (2020). Automated analysis of mid-
dle school students’ written reflections during game-
based learning. In Bittencourt, I. I., Cukurova, M.,
Muldner, K., Luckin, R., and Mill
´
an, E., editors, Ar-
tificial Intelligence in Education, pages 67–78, Cham.
Springer International Publishing.
Chin, C. and Osborne, J. (2010). Students’ questions and
discursive interaction: Their impact on argumenta-
tion during collaborative group discussions in science.
Journal of Research in Science Teaching, 47(7):883–
908.
D’antoni, L., Kini, D., Alur, R., Gulwani, S., Viswanathan,
M., and Hartmann, B. (2015). How can automatic
feedback help students construct automata? ACM
Trans. Comput.-Hum. Interact., 22(2).
Davis, E. A. (2003). Prompting middle school science stu-
dents for productive reflection: Generic and directed
prompts. Journal of the Learning Sciences, 12(1):91–
142.
Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018).
BERT: pre-training of deep bidirectional transformers
for language understanding. CoRR, abs/1810.04805.
Dietz, B. and Hurn, J. (2013). Using learning analytics
to predict (and improve) student success: A faculty
perspective. Journal of Interactive Online Learning,
12:17–26.
Dzikovska, M., Nielsen, R., Brew, C., Leacock, C., Gi-
ampiccolo, D., Bentivogli, L., Clark, P., Dagan, I., and
Dang, H. T. (2013). SemEval-2013 task 7: The joint
student response analysis and 8th recognizing textual
entailment challenge. In Second Joint Conference
on Lexical and Computational Semantics (*SEM),
Volume 2: Proceedings of the Seventh International
Workshop on Semantic Evaluation (SemEval 2013),
pages 263–274, Atlanta, Georgia, USA. Association
for Computational Linguistics.
Fern
´
andez-Delgado, M., Cernadas, E., Barro, S., and
Amorim, D. (2014). Do we need hundreds of clas-
sifiers to solve real world classification problems?
Journal of Machine Learning Research, 15(90):3133–
3181.
Goda, K., Hirokawa, S., and Mine, T. (2013). Automated
evaluation of student comments on their learning be-
havior. In Advances in Web-Based Learning – ICWL
2013, pages 131–140, Berlin, Heidelberg. Springer
Berlin Heidelberg.
Goda, K. and Mine, T. (2011). Pcn: Quantifying learn-
ing activity for assessment based on time-series com-
ments. In Proceedings of the 3rd International Con-
ference on Computer Supported Education - Volume
2: ATTeL, (CSEDU 2011), pages 419–424. INSTICC,
SciTePress.
Ha, M., Nehm, R. H., Urban-Lurain, M., and Merrill,
J. E. (2011). Applying computerized-scoring models
of written biological explanations across courses and
colleges: Prospects and limitations. CBE—Life Sci-
ences Education, 10(4):379–393.
Hume, A. and Coll, R. (2009). Assessment of learning,
for learning, and as learning: New zealand case stud-
ies. Assessment in Education: Principles, Policy and
Practice, 16.
Jiang, Y., Syed, S. J., and Golab, L. (2016). Data mining of
undergraduate course evaluations. INFORMATICS IN
EDUCATION, 15:85–102.
Jordan, S. (2012). Student engagement with assessment and
feedback: Some lessons from short-answer free-text
e-assessment questions. Computers and Education,
58(2):818–834.
Kroeze, K. A., van den Berg, S. M., Lazonder, A. W., Veld-
kamp, B. P., and de Jong, T. (2019). Automated feed-
back can improve hypothesis quality. Frontiers in Ed-
ucation, 3:116.
Le, Q. V. and Mikolov, T. (2014). Distributed rep-
resentations of sentences and documents. CoRR,
abs/1405.4053.
Liu, O. L., Rios, J. A., Heilman, M., Gerard, L., and Linn,
M. C. (2016). Validation of automated scoring of
science assessments. Journal of Research in Science
Teaching, 53(2):215–233.
Macfadyen, L. and Dawson, S. (2010). Mining lms data to
develop an “early warning system” for educators: A
proof of concept. Computers and Education, 54:588–
599.
Maier, U., Wolf, N., and Randler, C. (2016). Effects of a
computer-assisted formative assessment intervention
based on multiple-tier diagnostic items and different
feedback types. Computers and Education, 95:85–98.
CSEDU 2021 - 13th International Conference on Computer Supported Education
24