Automatic Test Item Creation in Self-Regulated Learning - Evaluating Quality of Questions in a Latin American Experience

Gudrun Wesiak, Rocael Hernandez Rizzardini, Hector Amado-Salvatierra, Christian Guetl, Mohammed Smadi

2013

Abstract

Research on self-regulated learning (SRL) has shown the importance of the learner's active role and of the cognitive and meta-cognitive strategies used to self-regulate learning. One fundamental step is to self-assess the acquired knowledge, to identify key concepts, and to review one's understanding of them. In this paper, we present an experimental setting in Guatemala with students from several countries. The study provides evaluation results from the use of an enhanced automatic question creation tool (EAQC) in a self-regulated online learning environment. In addition to assessment quality, motivational and emotional aspects, usability, and task value are addressed. The EAQC extracts concepts from a given text and automatically creates different types of questions based either on the automatically extracted concepts or on concepts supplied by the user. The findings show comparable quality for automatically and human-generated concepts, whereas questions created by a teacher were in part rated higher than computer-generated questions. While the difficulty and terminology of the questions were rated equally, teacher-generated questions were considered more relevant and more meaningful. Future improvements should therefore focus especially on these aspects of question quality.
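To make the pipeline described above concrete, the following is a minimal illustrative sketch of the general idea of concept extraction followed by automatic item creation. It is not the EAQC implementation: the function names (`extract_concepts`, `cloze_questions`), the term-frequency heuristic, and the stopword list are assumptions chosen only to demonstrate the text-to-concepts-to-questions flow; the actual tool uses more sophisticated NLP and produces several question types.

```python
# Toy sketch of a "text -> concepts -> test items" pipeline (assumed,
# simplified stand-in for the EAQC's approach; standard library only).
import re
from collections import Counter

STOPWORDS = {
    "the", "a", "an", "and", "or", "of", "in", "on", "to", "is", "are",
    "for", "with", "that", "this", "by", "as", "it", "be", "from", "their",
}

def extract_concepts(text, top_n=5):
    """Rank candidate concepts by term frequency (a crude stand-in for
    statistical/semantic concept extraction)."""
    words = re.findall(r"[A-Za-z][A-Za-z-]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

def cloze_questions(text, concepts):
    """Create fill-in-the-blank items by blanking a concept in the
    sentence where it first occurs."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    items = []
    for concept in concepts:
        for sentence in sentences:
            if concept in sentence.lower():
                pattern = re.compile(re.escape(concept), re.IGNORECASE)
                items.append({
                    "question": pattern.sub("_____", sentence, count=1),
                    "answer": concept,
                })
                break
    return items

if __name__ == "__main__":
    sample = (
        "Self-regulated learning requires learners to monitor their own "
        "understanding. Self-assessment helps learners identify key concepts "
        "and review their understanding of the learning material."
    )
    for item in cloze_questions(sample, extract_concepts(sample, top_n=3)):
        print(item["question"], "->", item["answer"])
```

In the study, items generated along these general lines (by the real tool) were compared with teacher-authored items on dimensions such as difficulty, terminology, relevance, and meaningfulness.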



Paper Citation


in Harvard Style

Wesiak G., Hernandez Rizzardini R., Amado-Salvatierra H., Guetl C. and Smadi M. (2013). Automatic Test Item Creation in Self-Regulated Learning - Evaluating Quality of Questions in a Latin American Experience. In Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU, ISBN 978-989-8565-53-2, pages 351-360. DOI: 10.5220/0004387803510360


in BibTeX Style

@conference{csedu13,
author={Gudrun Wesiak and Rocael Hernandez Rizzardini and Hector Amado-Salvatierra and Christian Guetl and Mohammed Smadi},
title={Automatic Test Item Creation in Self-Regulated Learning - Evaluating Quality of Questions in a Latin American Experience},
booktitle={Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU},
year={2013},
pages={351-360},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004387803510360},
isbn={978-989-8565-53-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Conference on Computer Supported Education - Volume 1: CSEDU
TI - Automatic Test Item Creation in Self-Regulated Learning - Evaluating Quality of Questions in a Latin American Experience
SN - 978-989-8565-53-2
AU - Wesiak G.
AU - Hernandez Rizzardini R.
AU - Amado-Salvatierra H.
AU - Guetl C.
AU - Smadi M.
PY - 2013
SP - 351
EP - 360
DO - 10.5220/0004387803510360