The idea is to try to identify the tasks of the model in the proposal, starting with their beginnings. Each possible beginning is identified in the proposal; then the rest of the task is sought. Finally, a last procedure determines whether the proposal corresponds to the model and what mark (or distance) can be assigned. This procedure operates as a function of the matches found and of the task descriptors. Thus, only tasks whose descriptor in the model allows them to be absent may be missing, and likewise for the descriptors governing the order of tasks. The mark given to the proposal is maximal when the match is complete and does not require the flexibility allowed by the descriptors.
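To make this procedure concrete, here is a minimal sketch of such a scoring function. The Task structure, the optional and order_free descriptors, and the 0.25 penalty are illustrative assumptions only, not the exact representation or weights of our implementation:

    # Minimal sketch of descriptor-based scoring. Descriptors and the
    # penalty value are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        optional: bool = False    # descriptor: task may be absent
        order_free: bool = False  # descriptor: task may appear out of order

    def score_proposal(model, matched):
        # `matched` maps a task name to its position in the proposal, if found.
        mark, last_pos = 1.0, -1
        for task in model:
            pos = matched.get(task.name)
            if pos is None:
                if not task.optional:
                    return None       # mandatory task missing: no recognition
                mark -= 0.25          # flexibility used: lower the mark
            else:
                if pos < last_pos:    # task found out of order
                    if not task.order_free:
                        return None
                    mark -= 0.25
                last_pos = max(last_pos, pos)
        return max(mark, 0.0)

    model = [Task("init"), Task("loop"), Task("report", optional=True)]
    print(score_proposal(model, {"init": 0, "loop": 1}))  # 0.75: optional task absent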
4 FIRST EXPERIMENTAL RESULTS
To test our approach, we worked with papers from a second-year computer science exam at the preparatory school for science and technology of Annaba, Algeria, on the following exercise: write a program that checks whether the elements of an integer array are consecutive or not (for example, the elements 4, 5, 6, 7, 8 are consecutive while the elements 1, 3, 4, 5, 6 are not).
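For reference, a straightforward solution to this exercise can be sketched as follows (rendered here in Python; the students worked in the language taught in their course):

    # Elements are consecutive when each one equals its predecessor plus one.
    def are_consecutive(elements):
        return all(b == a + 1 for a, b in zip(elements, elements[1:]))

    print(are_consecutive([4, 5, 6, 7, 8]))  # True
    print(are_consecutive([1, 3, 4, 5, 6]))  # False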
After manually correcting 22 papers, we were able to extract 7 possible proposal templates (correct and incorrect). With these models, we automatically analyzed 72 new papers (tutorial (TD) papers from 4 different groups). The average recognition rate obtained was 56%; it was similar across the different groups of papers, and similar for correct and incorrect papers.
For the models and papers of Groups 1, 2 and 3, we cross-checked the recognition results against a "by hand" analysis carried out by three teachers. The different analyses agreed in a large majority of cases (35 complete agreements, i.e. 66% of the cases, in which all three judges had the same opinion on the choice of a model or on the absence of recognition).
For the 31 papers recognized in Groups 1, 2 and 3, we analyzed the marks assigned (marks given out of 6). Overall, a third of the papers (11) received the same mark as on the day of the exam; for half of the papers (15), the mark differed by one point from the exam-day score; and in 5 cases (16%), it differed by 2 or 3 points.
5 CONCLUSION AND PERSPECTIVES
We have presented a method for recognizing learners' algorithms. Our method takes advantage of the application context to build and use a base of algorithm proposal models. The models are enriched with information that makes it possible to apply effective program-comprehension techniques from software engineering, and to score algorithm proposals from the identified model and the distance between this model and the proposal. An initial experiment on exam papers gave an interesting recognition rate (over 50%), similar to the "by hand" recognition rate, with marks for the recognized papers close to the manual marks. To improve the marking, a promising approach would be to combine the recognition algorithm with a dynamic assessment of the proposals based on test cases (Bouhineau, 2013). Beyond marking, we also believe we can combine the models with information on the knowledge and skills involved in each task and sub-task, and so enrich the assessment.
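As an illustration of such a combination, a dynamic assessment can be sketched as running each proposal on a small set of test cases and scoring it by its pass ratio; the test cases and the scoring rule below are illustrative assumptions, not the scheme of (Bouhineau, 2013):

    # Hypothetical test cases for the exercise of Section 4: (input, expected).
    TEST_CASES = [
        ([4, 5, 6, 7, 8], True),
        ([1, 3, 4, 5, 6], False),
        ([7], True),
    ]

    def dynamic_score(proposal):
        # Fraction of test cases on which the proposal gives the expected result.
        passed = sum(1 for inputs, expected in TEST_CASES
                     if proposal(inputs) == expected)
        return passed / len(TEST_CASES)

    def student_proposal(elements):
        # An example of a correct proposal for the exercise.
        return all(b == a + 1 for a, b in zip(elements, elements[1:]))

    print(dynamic_score(student_proposal))  # 1.0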
REFERENCES
Bouhineau, D. (2013). Utilisation de traits sémantiques pour une méthodologie de construction d'un système d'aide dans un EIAH de l'algorithmique. In EIAH 2013 - 6e conférence sur les Environnements Informatiques pour l'Apprentissage Humain, pages 141–152. IRIT Press.
Chen, P. M. (2004). An automated feedback system for computer organization projects. IEEE Transactions on Education, 47(2):232–240.
Corbi, T. A. (1989). Program understanding: Challenge for
the 1990s. IBM Systems Journal, 28(2):294–306.
Mengel, S. A. and Yerramilli, V. (1999). A case study of
the static analysis of the quality of novice student pro-
grams. In ACM SIGCSE Bulletin, volume 31, pages
78–82. ACM.
Michaelson, G. (1996). Automatic analysis of functional program style. In ASWEC, page 38. IEEE.
Selfridge, P. G., Waters, R. C., and Chikofsky, E. J. (1993). Challenges to the field of reverse engineering. In Proceedings of the Working Conference on Reverse Engineering, pages 144–150. IEEE.
Simkin, M. G. and Kuechler, W. L. (2005). Multiple-choice
tests and student understanding: What is the connec-
tion? Decision Sciences Journal of Innovative Educa-
tion, 3(1):73–98.
Sitthiworachart, J. and Joy, M. (2004). Effective peer as-
sessment for learning computer programming. In
ACM SIGCSE Bulletin, volume 36, pages 122–126.
ACM.