who have used ASKER are computer scientists, but using ASKER does not require computer skills. In other cases, the authors were professors of physics, chemistry, or optics; the latter took charge of the tool in complete autonomy.
To conclude, the feedback from using ASKER in different contexts indicates that the tool meets the needs of both teachers and learners.
7 CONCLUSION AND PROSPECTS
In this article we introduced ASKER, a tool that enables teachers to create self-assessment exercises for their students. The tool can be used for distance learning or as a complement to face-to-face teaching. It supports several exercise types (matching, grouping, short open-ended questions, MCQ) that can be used to evaluate learning in many fields. To create exercises assessing a concept, the teacher defines an exercise model from which varied exercises can be generated, using text or image resources. Because the learner can request several exercises generated from the same model, she can self-assess repeatedly on the same concepts without the teacher having to define many exercises by hand.
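This model-based generation can be illustrated with a minimal sketch, assuming a simple template-with-constraints design (the class and slot names below are hypothetical illustrations, not ASKER's actual implementation): an exercise model holds variable slots, each with a domain and a constraint, and every generation request samples fresh values that satisfy the constraints.

```python
import random

# Minimal sketch of constraint-based exercise generation (hypothetical
# names; not ASKER's actual code). A model defines variable slots, each
# with a value domain and a constraint; generating an exercise samples
# one admissible value per slot and fills the question template.

class ExerciseModel:
    def __init__(self, template, slots):
        self.template = template  # question text with {slot} placeholders
        self.slots = slots        # name -> (domain, constraint)

    def generate(self):
        values = {}
        for name, (domain, constraint) in self.slots.items():
            # keep only values satisfying the constraint, given earlier slots
            candidates = [v for v in domain if constraint(v, values)]
            values[name] = random.choice(candidates)
        return self.template.format(**values), values

# A teacher-defined model: a sum of two distinct small integers.
model = ExerciseModel(
    "What is {a} + {b}?",
    {
        "a": (range(1, 10), lambda v, seen: True),
        "b": (range(1, 10), lambda v, seen: v != seen["a"]),  # b differs from a
    },
)

# Each learner request yields a fresh exercise from the same model,
# e.g. "What is 3 + 7?" on one run and "What is 8 + 2?" on another.
question, values = model.generate()
print(question)
```

Under this reading, the teacher's authoring effort is spent once on the model and its constraints, while the variety of generated exercises comes from the sampling step.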
Our research hypothesis was that placing constraints on the exercises to be generated makes it possible both to obtain sufficient variety of exercises for learners to train and self-assess, and to reduce the work required of the author teacher. The evaluation results reported in Section 6 validate this hypothesis.
ASKER can be used in a variety of fields and learning contexts, at any level, and thus offers many possibilities of use. Its main limitation is that the knowledge to be learned has no explicit representation in ASKER; acquiring this knowledge therefore represents a major challenge. Since authors are the main users of ASKER, it would be interesting to have them build the domain knowledge, as they already do for formulas. The system could assist them in this task by proposing a generalization of the information they provide when creating their exercise models. We intend to use the activity traces of teachers using ASKER to enable the system to assist them in this elicitation of domain knowledge.
We also envisage attaching metadata describing the skills mobilized by an exercise model, so that we can offer the learner an open skills profile that involves her more closely in her self-assessment process, for example by setting objectives to achieve. Such skills profiles will also enable us to propose to the student a learning and training path for achieving those objectives.