Program Analysis and Evaluation using Quimera

Daniela Fonte, Ismael Vilas Boas, Daniela da Cruz, Alda Lopes Gancarski, Pedro Rangel Henriques


In recent years, a new challenge has arisen within programming communities: programming contests. Such contests vary slightly in their rules, but all of them are intended to assess competitors' skills in solving problems with a computer. These contests pose three kinds of challenges: creating a good problem statement (for the members of the scientific committee); solving the problem well (for the programmers); and finding a fair way to assess the results (for the judges). This paper presents QUIMERA, a web-based application intended to be a full programming-contest management system as well as an automatic judge. Besides the traditional dynamic approach to program evaluation, QUIMERA also provides static analysis of the program for a finer-grained assessment of solutions. Static analysis benefits from the technology developed for compilers and language-based tools, and is supported by source code analysis and software metrics.
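As a hypothetical illustration of the kind of metric-based static analysis the abstract describes (this is not QUIMERA's actual code), an automatic judge might score a submission by a classic software metric such as McCabe's cyclomatic complexity, approximated here as one plus the number of branch points found in the program's syntax tree:

```python
import ast

# Hypothetical sketch, not QUIMERA's implementation: node types that
# introduce a branch in the control flow of a Python submission.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.ExceptHandler, ast.With, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

# A toy submission: one `if`, one `for`, and one nested `if`,
# so the metric evaluates to 1 + 3 = 4.
submission = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime or small"
"""
print(cyclomatic_complexity(submission))  # prints 4
```

A judge could combine several such metrics (complexity, size, duplication) into a quality score alongside the usual dynamic pass/fail testing.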


  1. Cheang, B., Kurnia, A., Lim, A., and Oon, W.-C. (2003). On automated grading of programming assignments in an academic institution. Comput. Educ., 41:121-131.
  2. Danić, M., Radošević, D., and Orehovački, T. (2011). Evaluation of student programming assignments in online environments. CECiiS: Central European Conference on Information and Intelligent Systems.
  3. Forsythe, G. E. and Wirth, N. (1965). Automatic grading programs. Technical report, Stanford University.
  4. Leal, J. P. (2003). Managing programming contests with Mooshak. Software-Practice & Experience.
  5. Leal, J. P. and Moreira, N. (1999). Automatic Grading of Programming Exercises. page 383.
  6. Leal, J. P. and Silva, F. (2008). Using Mooshak as a Competitive Learning Tool. The 2008 Competitive Learning Symposium.
  7. Patil, A. (2010). Automatic grading of programming assignments. Master's projects, Department of Computer Science, San José State University.
  8. Rahman, K., Nordin, M., and Che, W. (2008). Automated programming assessment using pseudocode comparison technique: Does it really work?
  9. Tiantian, W., Xiaohong, S., Peijun, M., Yuying, W., and Kuanquan, W. (2009). Autolep: An automated learning and examination system for programming and its application in programming course. In First International Workshop on Education Technology and Computer Science, USA.
  10. Wang, T., Su, X., Ma, P., Wang, Y., and Wang, K. (2011). Ability-training-oriented automated assessment in introductory programming course. Comput. Educ., 56:220-226.
  11. Zamin, N., Mustapha, E. E., Sugathan, S. K., Mehat, M., and Anuar, E. (2006). Development of a web-based automated grading system for programming assignments using static analysis approach.

Paper Citation

in Harvard Style

Fonte D., Vilas Boas I., da Cruz D., Lopes Gancarski A. and Rangel Henriques P. (2012). Program Analysis and Evaluation using Quimera. In Proceedings of the 14th International Conference on Enterprise Information Systems - Volume 2: ICEIS, ISBN 978-989-8565-11-2, pages 209-219. DOI: 10.5220/0004001702090219

in Bibtex Style

@conference{iceis12,
author={Daniela Fonte and Ismael Vilas Boas and Daniela da Cruz and Alda Lopes Gancarski and Pedro Rangel Henriques},
title={Program Analysis and Evaluation using Quimera},
booktitle={Proceedings of the 14th International Conference on Enterprise Information Systems - Volume 2: ICEIS},
year={2012},
pages={209-219},
doi={10.5220/0004001702090219},
isbn={978-989-8565-11-2},
}

in EndNote Style

JO - Proceedings of the 14th International Conference on Enterprise Information Systems - Volume 2: ICEIS
TI - Program Analysis and Evaluation using Quimera
SN - 978-989-8565-11-2
AU - Fonte D.
AU - Vilas Boas I.
AU - da Cruz D.
AU - Lopes Gancarski A.
AU - Rangel Henriques P.
PY - 2012
SP - 209
EP - 219
DO - 10.5220/0004001702090219