helps both students and teaching staff focus
on more relevant discussions;
• since these assignments involve a number of
different technologies, a set of validators
integrated with the LMS makes it easier for
students to check their assignments than
using external validation services one by one;
• validation can help students arrive at a
working environment configuration (for
instance, by validating the XML configuration
files of the application server; a minimal
sketch follows this list);
• some validators were configured to be more
sensitive and to report more detailed warnings
than a typical compiler would (a second sketch
after this list illustrates the idea). This
proved effective when students used newer
compiler and runtime versions (e.g. of Java)
than those available on our servers, as the
additional warnings gave students a hint where
to start looking for problems.
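To make the XML configuration example above concrete, the listing below is a minimal sketch of schema-based validation using the standard javax.xml.validation API; the file names (server-config.xml, server-config.xsd) are hypothetical placeholders, and the listing illustrates only the general approach, not the actual ORVViS validator code.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

/* Minimal sketch: validate an application server configuration
 * file against its XML Schema. File names are hypothetical. */
public class ConfigValidator {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("server-config.xsd"));
        Validator validator = schema.newValidator();
        try {
            validator.validate(new StreamSource(new File("server-config.xml")));
            System.out.println("Configuration is valid.");
        } catch (org.xml.sax.SAXException e) {
            // Report the validation error, similar to what an
            // LMS-integrated validator would show the student.
            System.out.println("Invalid configuration: " + e.getMessage());
        }
    }
}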
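For the last point, one way a validator can surface more warnings than a student's default build is to invoke the compiler programmatically with stricter flags. The sketch below uses the standard javax.tools API with -Xlint:all; the file name Submission.java is hypothetical, and this is an illustration of the approach rather than ORVViS's actual implementation.

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

/* Minimal sketch: compile a submission with all lint warnings
 * enabled, so students get hints beyond default compiler output. */
public class StrictCompile {
    public static void main(String[] args) {
        // Requires a JDK; getSystemJavaCompiler() returns null on a bare JRE.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        // The three null streams default to System.in/out/err;
        // run() returns 0 on success, nonzero on errors.
        int result = compiler.run(null, null, null,
                                  "-Xlint:all", "Submission.java");
        System.out.println(result == 0 ? "OK" : "Compilation failed");
    }
}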
6 CONCLUSIONS
The experiences presented here give us a good
starting point for continuing to use and upgrade
ORVViS. The students are satisfied with the help
provided, while the staff can obtain relevant
information on how students behave when solving
the assignment problems. Whenever we asked the
students for help related to the system, such as
testing, they responded enthusiastically, as they
consider the system beneficial and time-saving.
The downside is that students start to depend on
the system and stop validating their solutions by
themselves, as observed in our initial experience
report (Bosnic et al., 2010): the number of
successful submissions dropped significantly
during a few days in which the system was
unavailable. Even with such systems in place,
students should remain capable of creating valid
solutions without the system's help.
It should be noted that ORVViS currently supports
neither grading nor checking most of the
assignments' complex semantic requirements (posed
in natural language); it focuses mainly on syntax.
Such features would be a worthwhile asset, and we
plan to extend the system with them in the future.
However, given the main course objective, the
complexity of the content taught, and the focus on
understanding the underlying open processes, we
still feel the need to discuss the solutions with
students face-to-face.
Additional future work consists of integrating the
staff/administrator interface with the Moodle LMS
as well. We expect that the additional APIs and
plugin tools available in the new Moodle 2.x
versions will simplify the task of integrating the
two systems; a rough sketch of one such integration
point follows.
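As an illustration of what Moodle 2.x makes possible, the sketch below queries Moodle's standard REST web service endpoint from Java. The host name and token are hypothetical placeholders, core_course_get_courses is just one example of the standard web service functions, and any real integration would depend on the services and permissions configured on the server.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/* Minimal sketch: call a Moodle 2.x REST web service.
 * Host and token are hypothetical placeholders. */
public class MoodleClient {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://moodle.example.org/webservice/rest/server.php"
                + "?wstoken=YOUR_TOKEN"
                + "&wsfunction=core_course_get_courses"
                + "&moodlewsrestformat=json";
        HttpURLConnection conn =
            (HttpURLConnection) new URL(endpoint).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);  // raw JSON course list
            }
        }
    }
}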
ACKNOWLEDGEMENTS
This work is supported by the Croatian Ministry of
Science, Education and Sport, under the research
project ZP0361965 “Software Engineering in
Ubiquitous Computing”.
We would like to thank our former student Darko
Ronić and the other students under his supervision
for their work on the ORVViS application.
REFERENCES
Ala-Mutka, K.M., 2005. A Survey of Automated
Assessment Approaches for Programming Assignments.
Computer Science Education, 15(2), pp. 83-102.
Auffarth, B. et al., 2008. System for Automated
Assistance in Correction of Programming Exercises
(SAC).
Bosnic, I., Orlic, M. & Zagar, M., 2010. Beyond
LMS: Expanding Course Experience with Content
Collaboration and Smart Assignment Feedback.
International Journal of Emerging Technologies in
Learning (iJET), 5(4).
Cosma, G. & Joy, M., 2006. Source-code plagiarism:
A UK academic perspective. Research Report No. 422,
Department of Computer Science, University of
Warwick.
Dick, M. et al., 2003. Addressing student cheating.
ACM SIGCSE Bulletin, 35(2), p. 172.
Edwards, S.H. & Perez-Quinones, M.A., 2008.
Web-CAT: automatically grading programming
assignments. In Proceedings of the 13th Annual
Conference on Innovation and Technology in
Computer Science Education (ITiCSE '08). ACM.
Goel, S. & Rao, D., 2005. Plagiarism and its
Detection in Programming Languages.
Higgins, C. et al., 2003. The CourseMarker CBA
system: Improvements over Ceilidh. Education and
Information Technologies, 8(3), pp. 287-304.
Ihantola, P. et al., 2010. Review of recent systems
for automatic assessment of programming
assignments. In Proceedings of the 10th Koli
Calling International Conference on Computing
Education Research (Koli Calling '10). ACM Press,
pp. 86-93.
Joy, M., Griffiths, N. & Boyatt, R., 2005. The BOSS
online submission and assessment system. Journal on
Educational Resources in Computing, 5(3), p. 2.
Lancaster, T. & Culwin, F., 2004. A Comparison of
Source Code Plagiarism Detection Engines. Computer
Science Education, 14(2), pp. 101-112.
Sheard, J. & Dick, M., 2003. Influences on cheating
practice of graduate students in IT courses: what
are the factors? In Proceedings of the 8th Annual
Conference on Innovation and Technology in Computer
Science Education. ACM, p. 49.
Tomić, S. et al., 2006. Living The E-Campus Dream.
In A. Szucs & I. Bø, eds. Proceedings of the EDEN
Conference. Vienna, Austria: European Distance and
E-Learning Network, pp. 644-650.
Wagner, N., 2004. Plagiarism by student programmers.
San Antonio, TX, USA.