ple of 30 sketches were taken from the set. In these 30 sketches, there was one false positive and no false negatives. This confirms that the tool is able to detect design smells in different kinds of PROCESSING sketches.
8 CONCLUSIONS AND FUTURE WORK
This paper applied the concept of design smells to PROCESSING. The new design smells that we introduced relate to common practice by novice programmers as well as by the PROCESSING community. In addition, we identified relevant object-oriented smells that also apply to PROCESSING. We showed the relevance of these new and existing smells to PROCESSING code by manually analyzing code written by novices and by the PROCESSING community. We found that a majority of the programs by novices and by the community contain at least one PROCESSING-related design smell. These smells are caused by a poor understanding of application design in general, or by a lack of attention to design.
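As a minimal illustration, consider the following sketch. It is not taken from our data set, but exemplifies the kind of pattern such smells describe: state updates, interaction handling, and rendering are all mixed into draw(), with hard-coded values throughout.

// Illustrative example (not from the study): everything in draw(),
// with magic numbers instead of named constants or width/height.
float x = 0;

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  x = x + 2;               // state update mixed into rendering
  if (x > 400) {           // hard-coded bound instead of width
    x = 0;
  }
  if (mousePressed) {      // interaction handling mixed in as well
    x = mouseX;
  }
  ellipse(x, 200, 50, 50); // magic numbers for position and size
}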
For the eight identified design smells, we implemented customized checks in PMD. These proposed rules were checked against the manually analyzed sets of PROCESSING sketches to estimate the false positive and false negative rates. They were then applied to a new set of code to demonstrate their wider applicability. The results show that the proposed way of detecting design smells performs well on the code examples used in this study. The analysis also revealed that even course material and textbook examples exhibit design smells to a somewhat surprising extent.
Along the way, this work also produced the first static analysis tool for PROCESSING. It created an automated pipeline, defined new rules, and customized existing rules, all to accommodate PROCESSING-specific requirements.
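To illustrate the shape of these checks, the following sketch of a custom rule flags a draw() method that accumulates too many statements. It is written against the PMD 6 Java rule API; the rule name and threshold are illustrative assumptions, not the exact rule from our implementation.

import net.sourceforge.pmd.lang.java.ast.ASTBlockStatement;
import net.sourceforge.pmd.lang.java.ast.ASTMethodDeclaration;
import net.sourceforge.pmd.lang.java.rule.AbstractJavaRule;

// Illustrative PMD rule sketch (assumed name and threshold): report
// draw() methods whose bodies grow beyond a fixed number of statements.
public class LongDrawRule extends AbstractJavaRule {

    private static final int MAX_STATEMENTS = 20; // assumed threshold

    @Override
    public Object visit(ASTMethodDeclaration node, Object data) {
        boolean isDraw = "draw".equals(node.getMethodName());
        int statements =
            node.findDescendantsOfType(ASTBlockStatement.class).size();
        if (isDraw && statements > MAX_STATEMENTS) {
            addViolation(data, node); // report the smell at the method node
        }
        return super.visit(node, data);
    }
}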
This study has introduced a selected set of design smells that apply to PROCESSING. In the future, more research on design smells will be needed to further develop design guidelines for PROCESSING. In software development practice, design smells are used to guide the refactoring of code. Similarly, we need refactoring techniques for PROCESSING code, including a benchmark of well-structured programs. This should be accompanied by a review of existing teaching resources, to prevent unnecessary smells from setting a poor example.
This paper presents a tool for automated detection of design smells, and discusses its accuracy and applicability. Future research has to investigate the most effective use of such tools: whether students should use them directly, or only teaching assistants, to help them provide feedback; how frequently to use them; and if and how to integrate them into peer review, assessment, or grading.