An Interactive Tutoring System for Training Software Refactoring
Thorsten Haendler, Gustaf Neumann and Fiodor Smirnov
Institute for Information Systems and New Media,
Vienna University of Economics and Business (WU), Austria
Keywords:
Intelligent Tutoring System, Software Refactoring, Software Design, Code Visualization, Software-
engineering Education and Training, Unified Modeling Language (UML2), Interactive Training Environment.
Abstract:
Although considered useful and important, software refactoring is often neglected in practice because of the
perceived risks and difficulties of performing it. A way to address these challenges can be seen in promoting
developers’ practical competences. In this paper, we propose an approach for an interactive training environ-
ment for addressing practical competences in software refactoring. In particular, we present a tutoring system
that provides interactive feedback to the users (e.g., university students or software developers) regarding the
software-design quality and the functional correctness of the (modified) source code. After each code modi-
fication (refactoring step), the user can review the results of run-time regression tests and compare the actual
software design (as-is) with the targeted design (to-be) in order to check quality improvement. For this pur-
pose, structural and behavioral diagrams of the Unified Modeling Language (UML2) representing the as-is
software design are automatically reverse-engineered from source code. The to-be design diagrams (also in
UML) can be pre-specified by the instructor. We illustrate the usability of the approach for training compe-
tences in refactoring via short application scenarios and describe exemplary learning paths. Moreover, we
provide a web-based software-technical implementation in Java (called refacTutor) to demonstrate the techni-
cal feasibility of the approach. Finally, limitations and further potential of the approach are discussed.
1 INTRODUCTION
As a result of time pressure in software projects, prior-
ity is often given to implementing new features rather
than ensuring code quality (Martini et al., 2014). In
the long run, this leads to software aging (Parnas,
1994) and increased technical debt with the conse-
quence of increased maintenance costs on the soft-
ware system (i.e., debt interest) (Kruchten et al.,
2012). A popular technique for code-quality assur-
ance is software refactoring, which aims at improving
code quality by restructuring the source code while
preserving the external system behavior (Opdyke,
1992; Fowler et al., 1999). Several kinds of flaws
(such as code smells) can negatively impact code
quality and are thus candidates for software refac-
toring (Alves et al., 2016). Besides kinds of smells
that are relatively simple to identify and to refactor
(e.g., stylistic code smells), others are more complex
and difficult to identify, to assess and to refactor, such
as smells in software design and architecture (Fowler
et al., 1999; Suryanarayana et al., 2014; Nord et al.,
2012). Although refactoring is considered useful and
important, it is often neglected in practice due to sev-
eral barriers, among which are (perceived by software
developers) the difficulty of performing refactoring,
the risk of introducing an error into a previously cor-
rectly working software system, and a lack of ade-
quate tool support (Tempero et al., 2017). These bar-
riers pose challenges with regard to improving refac-
toring tools and promoting the skills of developers.
On the one hand, in recent years, several tools have
been proposed that apply static analysis for identify-
ing smells via certain metrics and benchmarks (e.g.,
level of coupling between system elements) or for supporting developers in planning and performing the actual refactoring (steps), such as JDeodorant (Tsantalis et al.,
2008) and DECOR (Moha et al., 2010), as well as
for measuring and quantifying their impact in terms
of technical debt, such as SonarQube (Campbell and
Papapetrou, 2013) or JArchitect (CoderGears, 2018).
However, for more complex kinds of smells (e.g.,
on the level of software design and architecture) the
smell detection and refactoring is difficult. Thus,
these tools only cover a modest number of smells
(Fernandes et al., 2016; Fontana et al., 2012) and
tend to produce false positives (Fontana et al., 2016)
(which, i.a., represent constructs intentionally used by
the developer with symptoms similar to smells, such
as certain design patterns). In addition, the decision of what and how to refactor also depends on domain
knowledge (e.g., regarding design rationale) provided
by human experts such as software architects. Due to
these issues, the refactoring process is still challeng-
ing and requires human expertise.
On the other hand, so far little attention has been paid in research to education and training in
the field of software refactoring. Besides textbooks
with best practices and rules on how to identify and
to remove smells via refactoring, e.g., (Fowler et al.,
1999), there are only a few approaches (see Section
6) for supporting developers in accomplishing or im-
proving practical and higher-level competences such
as application, analysis and evaluation according to
Bloom’s taxonomy of cognitive learning objectives;
see, e.g., (Bloom et al., 1956; Krathwohl, 2002).
In this paper, we propose an approach for an in-
teractive learning environment for training practical
competences in software refactoring. In particular,
we present a tutoring system that provides interac-
tive feedback and decision-support regarding both the
software-design quality and the functional correctness
of the source code modified by the user (see Fig. 1).
Figure 1: Exercise interaction supported by the tutoring system (the user performs code modifications/refactoring steps; refacTutor returns feedback on code quality and software behavior, e.g., as-is vs. to-be design and test results).
Driven by a refactoring task defined by the in-
structor, after each refactoring step, the user can re-
view the results of run-time regression tests (e.g., pre-
specified by the instructor; in order to check that
no error has been introduced) and compare the ac-
tual software design (as-is) with the intended de-
sign (to-be; in order to check quality improvement).
For this purpose, structural and behavioral software-
design diagrams of the Unified Modeling Language
(UML2) (Object Management Group, 2015) repre-
senting the as-is software design are automatically
reverse-engineered from source code. The to-be de-
sign diagrams (also in UML) can be pre-specified
by the instructor. UML is the de-facto standard for
modeling and documenting structural and behavioral
aspects of software design. There is evidence that
UML design diagrams are beneficial for understand-
ing software design (issues) (Arisholm et al., 2006;
Scanniello et al., 2018; Haendler, 2018). This way,
the approach primarily addresses the refactoring of
issues on the level of software design and architec-
ture. Moreover, the code visualization supports users in building and questioning their mental models; see, e.g., (Cañas et al., 1994; George, 2000). In addition,
the system optionally reports on other quality aspects
provided by quality-analysis tools (Campbell and Pa-
papetrou, 2013).
In order to illustrate the usability for teaching
and training refactoring competences, we describe
exemplary learning paths, specify the corresponding
exercise-interaction workflow, and draw exemplary application scenarios (based on different learning objectives and tasks, with corresponding settings (configuration options) for decision support and feedback). We also introduce a proof-of-concept implementation in Java (called refacTutor¹) in terms of a
web-based development environment. Its application
is demonstrated via a short exercise example. In the
following Section 2, a short conceptual overview of
the proposed approach is given.
2 CONCEPTUAL OVERVIEW
Fig. 2 depicts an overview of the tutoring-system in-
frastructure in terms of technical components and ar-
tifacts used, created or modified by instructor and/or
user respectively.
Figure 2: Overview of key components and artifacts of the interactive tutoring system for training software refactoring (roles: instructor and user/student; artifacts: task, source code, test specification, hints, test result, as-is and to-be design as UML diagrams; analysis tools: test framework, quality analyzer, diagram builder; the instructor specifies and creates, the user reviews and modifies, the analysis tools automatically analyze the code and create feedback in steps 1 to 6).
¹ See Section 5. The software prototype is available for download from http://refactoringgames.com/refactutor
Focused on certain learning objectives (for details, also see Sections 3 and 4), the instructor at first specifies a task, prepares the initial source code for the exercise, specifies (or re-uses an existing) test script, and specifies the to-be software design (see step 1 in Fig. 2). The user's exercise then starts by analyzing the task (step 2). After each code modification (step 3), the user can review several kinds of feedback created by different analysis tools that analyze the code (see steps 4 to 6). These analyzers consist of a test framework for checking the functional correctness (test result), a quality analyzer that examines the code using pre-configured quality metrics (for providing hints on quality issues), and a diagram builder that derives (reverse-engineers) diagrams from source code that reflect the current as-is software design. This way, the tutoring system supports a cyclic exercise workflow of steps 3 to 6.
Structure. The remainder of this paper is struc-
tured as follows. In Section 3, the applied conceptual
framework of the interactive tutoring system (includ-
ing competences and learning paths, exercises, code
modification as well as feedback mechanisms) is elaborated in detail. Section 4 illustrates the usability
of the training environment for two exercise scenarios
(i.e., design refactoring and test-driven development).
Then, in Section 5, we present a proof-of-concept im-
plementation in Java; including a detailed exercise ex-
ample. In Section 6, related approaches are discussed.
Section 7 reflects limitations and further potential of
the approach and Section 8 concludes the paper.
3 INTERACTIVE TUTORING
ENVIRONMENT
Fig. 3 gives an overview of important conceptual aspects for training software refactoring. The model is structured into four layers that concretize from an exemplary learning and training path A (built on exemplary environments) via the exercises performed by the user B to refactoring paths and system states C, and finally views on system states D (feedback and decision support). The corresponding aspects are described below in detail.
3.1 Learning Objectives and Paths
The learning activity sequence (IMS Global Consortium, 2003) (depicted on the top layer A in Fig. 3) illustrates an exemplary learning path for software refactoring by sequencing multiple exercise units,
Figure 3: Layers of training refactoring via the tutoring system, from exemplary training environments and paths (lecture, paper-based exercises, training in tutoring system, training in serious game) via exercises (task, interaction, assessment) and refactoring paths with system states (initial, transitional, passing and non-passing final states connected by code modifications) to the state views (code, test behavior, design structure, design behavior).
which are based on different learning environments. The
path can be driven by certain learning objectives, e.g.
oriented to Bloom’s taxonomy of cognitive learning
objectives (Krathwohl, 2002). For instance, via a lecture based on best practices and rules provided by
a refactoring textbook, e.g., (Suryanarayana et al.,
2014), basic competences (i.e., knowledge and com-
prehension) may be addressed, e.g., the (theoretical)
understanding of rules for identifying refactoring can-
didates and for performing refactoring steps. For ap-
plying the comprehended techniques, e.g., units with
paper-based exercises (PBE) can follow. In order to
actually apply and improve practical competences on
executable code (via editor), a tutoring system then
can be used (see below). Moreover, via playing seri-
ous games (providing or simulating real-world condi-
tions, such as a large code base) (Haendler and Neu-
mann, 2019), further higher-level competences can be
acquired such as analysis and evaluation (e.g., while
developing and applying refactoring strategies for pri-
oritizing smell candidates and refactoring options).
This exemplary path along different training environ-
ments demonstrates that the proposed tutoring-system
approach has to be seen as a complement to other ap-
proaches, each focusing on certain competence levels.
3.2 Structure of Exercises
A unit might generally consist of several exercises. Each exercise of the proposed tutoring system (see the second layer B in Fig. 3) has a certain sequence consisting of the task (instruction) defined by the instructor, the exercise interaction of the user with the tutoring system, and the exercise assessment, which will be explained in detail below.
Figure 4: Refactoring-exercise workflow as a double-loop learning and decision-making cycle supported by the tutoring system (problem recognition: confrontation with the task defined by the instructor; problem analysis: analysis of the task and investigation of the provided views; decision-making: identification of options and planning of refactoring steps; implementation: performing refactoring steps (code modification); evaluation/analysis: analysis of effects via the provided views and feedback mechanism; both loops build on the user's mental model of refactoring, e.g., based on previous training units).
(1) The task defines the actual instructions to be per-
formed by the user. The instructions might be ori-
ented to e.g. a challenge to be solved (e.g., to re-
move a certain smell), a goal to be achieved (e.g.,
to realize a certain design structure), or instruc-
tions for concrete modification actions (e.g., to
perform an EXTRACTMETHOD refactoring). For
examples with further details, also see Section 4.
(2) The exercise interaction represents a cycle of
analysis and actions performed by the user which
is built on feedback provided by the tutoring sys-
tem. Fig. 4 represents the workflow of the exercise
interaction in terms of a double-loop learning and
decision-making cycle supported by the approach;
also see (Haendler and Frysak, 2018). Driven by
the specific task (problem recognition) and based
on the user's mental model, the user analyzes the
task and investigates the provided views (e.g., de-
sign diagrams). After analysis, the user identifies
options and plans for refactoring (decision mak-
ing). In this course, the user selects a certain op-
tion for code modification and plans the particu-
lar steps for performing them (implementation).
After the code has been modified, the user ana-
lyzes the feedback given by the tutoring system
(evaluation). This workflow supports double-loop learning (Argyris, 1977). Through previous learning and practice experiences, the users have built a certain mental model (Cañas et al., 1994)
on performing software refactoring, on which the
analysis and the decisions for planned modifica-
tions are built (first loop), e.g. how to modify the
code structure in order to remove the code or de-
sign smell. After modification, the feedback (e.g.
in terms of test result, the comparison of the as-
is with the to-be design diagrams) impacts the
user’s mental model and allows for scrutinizing
and adapting the decision-making rules (second
loop). This way, the cycle allows for (inter)active learning that promotes practical competences.
(3) The exercise assessment provides means for as-
sessing the outcome of the user’s actions in terms
of feedback for both the instructor as well as the
user. Basically, it is checked whether the as-is design conforms to the to-be design and whether the
tests pass. The verification of the design qual-
ity can be performed manually by the instructor
(by visually comparing the diagrams) or automat-
ically by using analysis tools to check for differ-
ences between the diagrams; or applying addi-
tional quality-analysis tools (see below). More-
over, the time for completing the exercise can be
measured.
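A heavily simplified sketch of such an automated check is given below; the class and method names are hypothetical, and the diagram comparison is reduced to comparing normalized textual diagram descriptions, which only approximates the tool-supported difference analysis mentioned above.

class ExerciseAssessment {
    // the exercise passes if the derived as-is design matches the to-be design
    // and all regression tests pass
    boolean passes(String asIsDiagramText, String toBeDiagramText, int failedTests) {
        boolean designConforms = asIsDiagramText.trim().equals(toBeDiagramText.trim());
        boolean behaviorPreserved = (failedTests == 0);
        return designConforms && behaviorPreserved;
    }
}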
3.3 Code Modifications and System
States
During exercise interaction, the system under analysis (SUA) can take different system states (see C in Fig. 3). The states can be classified into the initial state (S_INITIAL), the final state(s) (i.e., S_FINAL(PASSING) and S_FINAL(NON-PASSING)) and multiple transitional states (S_TRANSITIONAL). In particular, the initial state S_INITIAL represents the system at the beginning (e.g., as prepared by the instructor). Each actual or possible modification to be performed on a certain state then leads to a certain following state (S_TRANSITIONAL). Depending on the kind of task and smell, there are different ways of achieving the defined exercise goal. As mentioned above, for more complex smells several options and sequences for achieving a desired design can exist, such as for some types of software-design smells (Suryanarayana et al., 2014). This way, the options and steps of possible modifications can be represented via a graph structure (i.e., modifications as edges, states as nodes; see the sketch below). The performance of one user (via several refactoring steps starting from and/or leading to an S_TRANSITIONAL) then draws a concrete path through this graph. The final state can either be passing (S_FINAL(PASSING)) (i.e., fulfilling the quality standards, e.g., defined by to-be design diagrams, and passing the tests) or failing (S_FINAL(NON-PASSING)). For each state, several aspects are important for refactoring, which are reflected by selected views (see below).
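The following minimal Java sketch (our illustration, not part of refacTutor) shows how such a graph of refactoring options could be represented, with system states as nodes and code modifications as labeled edges.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RefactoringGraph {
    // state -> outgoing modifications ("refactoring -> target state")
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addModification(String fromState, String refactoring, String toState) {
        edges.computeIfAbsent(fromState, k -> new ArrayList<>())
             .add(refactoring + " -> " + toState);
    }

    public List<String> optionsAt(String state) {
        return edges.getOrDefault(state, new ArrayList<>());
    }

    public static void main(String[] args) {
        RefactoringGraph graph = new RefactoringGraph();
        // two alternative first steps lead towards the same passing final state
        graph.addModification("S_INITIAL", "EXTRACT CLASS", "S_TRANSITIONAL_1");
        graph.addModification("S_INITIAL", "MOVE METHOD", "S_TRANSITIONAL_2");
        graph.addModification("S_TRANSITIONAL_1", "MOVE FIELD", "S_FINAL(PASSING)");
        System.out.println(graph.optionsAt("S_INITIAL"));
    }
}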
3.4 Views on System States
Oriented to viewpoint models applied in software ar-
chitecture such as (Kruchten, 1995), we apply a view-
point model for refactoring that includes the follow-
ing four views on system states, i.e. code, test be-
havior as well as design structure and behavior (see D in Fig. 3). The views conform to the viewpoint
model depicted in Fig. 5 and serve as feedback and
decision support for the user on each system state. In
particular, there is evidence that for identifying and
assessing software design smells (such as ABSTRAC-
TION or MODULARIZATION smells), a combination
of UML class diagrams (providing inter-class cou-
plings evoked by method-call dependencies) and se-
quence diagrams (representing run-time scenarios) is
appropriate (Haendler, 2018). The viewpoint model
depicted in Fig. 5 reflects the views on system states.
Figure 5: Applied viewpoint model for refactoring (source code, test behavior, and software design with structure and behavior/interactions; the tests verify correct functionality, the design views abstract from the source code, and the as-is design is compared with the to-be design).
In addition to the source code (modified by the
user), the other views are provided by applying the
following tools and techniques.
The test behavior is specified (in terms of a test
script) using a certain test framework (e.g., xUnit or scenario-based tests). By performing the
tests after each (important) modification (regres-
sion testing) the functional correctness of the code
is ensured. Feedback is returned to the user in
terms of the test result.
For automatically deriving (a.k.a. reverse-
engineering) the design diagrams representing
the as-is software design of the source code,
(automatic) static and dynamic analysis tech-
niques are applied, see, e.g., (Richner and
Ducasse, 1999; Kollmann et al., 2002; Haendler
et al., 2015). The Unified Modeling Language
(UML) (Object Management Group, 2015) is
the de-facto standard for documenting software
design. For representing design structure, UML
class diagrams are extracted from source code;
for design behavior, UML sequence diagrams
are derived from execution traces. For further
details on the derivation techniques and the
resulting diagrams, see Section 5.
In addition, also software-quality analyzers (e.g.,
SonarQube (Campbell and Papapetrou, 2013))
can be used to identify quality issues (such as
smells) as well as to measure and quantify the
technical debt score, which both then can be pre-
sented as hints or feedback to the user.
4 APPLICATION SCENARIOS
In order to illustrate the feasibility of the proposed
tutoring-system approach for teaching and training
software refactoring, two possible application scenar-
ios are described. In Table 1, exemplary tasks with
corresponding values of views and other exercise as-
pects are specified for two scenarios, i.e., (a) design
refactoring and (b) test-driven development (TDD).
Table 1: Exemplary exercise scenarios: (a) design refactoring and (b) test-driven development.

Task: (a) Refactor the given (smelly) source code in order to achieve the defined to-be design. (b) Implement the given to-be design in code so that also the given tests pass.
Target Competence: (a) Analyze software design and plan & perform refactoring steps for removing a given smell. (b) Apply the red-green-refactor steps (to realize the defined to-be design and test behavior).
Prerequisite Competence: (a, b) Knowledge on refactoring options for the given smell type as well as on notations and meaning of UML diagrams.
Code (S_INITIAL): (a) Code with design smells. (b) No code given.
As-is Design (S_INITIAL): (a) Representing design smells. (b) No as-is design available.
As-is Design (S_FINAL(PASSING)): (a, b) Conforming to to-be design.
Tests: (a) Tests pass in S_INITIAL and in S_FINAL(PASSING). (b) Tests fail in S_INITIAL, but pass in S_FINAL(PASSING).
Assessment: (a, b) As-is and to-be design are identical and all tests pass.
4.1 Design Refactoring
First, consider that the instructor aims at fostering the
practical competences of analyzing the software de-
sign and performing refactoring steps (i.e., applica-
tion and analysis, see target competences in Table 1).
In this case, a possible training scenario can be real-
ized by presenting a piece of source code that behaves
as intended (i.e., runtime tests pass), but with smelly
design structure (i.e., as-is design and to-be design
are different). The user’s task then is to refactor the
given code in order to realize the specified targeted
design. As an important prerequisite, the user needs to
bring along the competences to already (theoretically)
know the rules for refactoring and to analyze UML di-
agrams. Fig. 6 depicts the values of different system states while performing the exercise (also see Sections 3.3 and 3.4). At the beginning (S_INITIAL), the tests pass, but code and design are smelly by containing,
e.g., a MULTIFACETEDABSTRACTION smell (Surya-
narayana et al., 2014). During the (path of) refac-
torings (e.g., by applying corresponding EXTRACT-
CLASS and/or MOVEMETHOD / MOVEFIELD refac-
torings (Fowler et al., 1999)), the values of the views
can differ (box in the middle). Characteristic for a
non-passing final state or the transitional states is that
at least one view (e.g., test, design structure or behav-
ior) does not meet the requirements. In turn, a passing
final state fulfills the requirements for all view values
(box on the right-hand side).
Figure 6: States with view values for the exercise scenario on design refactoring (from S_INITIAL via code modifications and S_TRANSITIONAL to S_FINAL(PASSING), each state characterized by the values of the code, test, design-structure, and design-behavior views).
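To make the scenario more tangible, the following hypothetical before/after fragment (not taken from the exercise code) sketches an EXTRACTCLASS refactoring that removes a MULTIFACETEDABSTRACTION smell by moving the reporting concern out of the account abstraction; the tests on the account behavior keep passing, while the class diagram gains an additional class and association.

// Before: one class mixes two unrelated responsibilities (smelly).
class SmellyAccount {
    private Double balance = 0.0;
    void deposit(Double amount) { balance += amount; }
    Double getBalance() { return balance; }
    String formatMonthlyReport() { return "balance: " + balance; } // reporting concern
}

// After: the reporting concern is extracted into its own class.
class Account {
    private Double balance = 0.0;
    void deposit(Double amount) { balance += amount; }
    Double getBalance() { return balance; }
}

class AccountReport {
    private final Account account;
    AccountReport(Account account) { this.account = account; }
    String formatMonthly() { return "balance: " + account.getBalance(); }
}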
4.2 Test-driven Development
Another application example can be seen in test-
driven development. Consider the situation that the
user shall improve her competences in performing the
red–green–refactor cycle typical for test-driven de-
velopment (Beck, 2003), which also has been iden-
tified as challenging for teaching and training (Mu-
gridge, 2003). This cycle includes the activities to
specify and run tests that reflect the intended behav-
ior (which first do not pass; red), then implement (ex-
tend) the code to pass the tests (i.e., green), and finally
modify the code in order to improve design quality
(i.e., refactor). For this purpose, corresponding run-
time tests and the intended to-be design are prepared
by the instructor. The user’s task then is to realize the
intended behavior and design by implementing and
modifying the source code. Fig. 7 depicts the view
values of the different system states.
Figure 7: States with view values for the exercise scenario on test-driven development (from S_INITIAL via code modifications and S_TRANSITIONAL to S_FINAL(PASSING), each state characterized by the values of the code, test, design-structure, and design-behavior views).
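As a small illustration of this cycle (class and test names are hypothetical and not part of the exercise), the following fragment shows a JUnit test written first against the intended behavior ("red") and the minimal production code that subsequently makes it pass ("green"); the "refactor" step would then align the code with the to-be design diagrams.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// minimal production code, written only after the test below existed ("green")
class SavingsAccount {
    private Double balance;
    SavingsAccount(Double balance) { this.balance = balance; }
    void applyInterest(Double rate) { balance += balance * rate; }
    Double getBalance() { return balance; }
}

public class SavingsAccountTest {
    @Test
    public void interestIsAddedToBalance() {
        // this test is specified before SavingsAccount exists ("red")
        SavingsAccount account = new SavingsAccount(1000.00);
        account.applyInterest(0.02);
        assertEquals(1020.00, account.getBalance(), 0.001);
    }
}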
Besides these two exemplary scenarios, multiple
other scenarios can be specified by varying the values
of the exercise aspects and views (see Table 1).
5 SOFTWARE PROTOTYPE
In order to demonstrate the technical feasibility, we
introduce our software prototype refacTutor² that re-
alizes key aspects of the infrastructure presented in
Fig. 2. In the following, we focus on applied tech-
nologies, the derivation of UML class diagrams re-
flecting the as-is design, the graphical user interface
(GUI), and a concrete exercise example.
5.1 Applied Technologies
We implemented a software prototype as a web-based
application using Java Enterprise Edition (EE) to
demonstrate the feasibility of the proposed concept.
Java is also the supported language for the refac-
toring exercises. As editor for code modification
the Ace Editor (Ajax.org, 2019) has been integrated,
which is a flexible browser-based code editor written
in JavaScript. After a test run, the entered code is then
passed to the server for further processing. For com-
piling the Java code, we applied InMemoryJavaCom-
piler (Trung, 2017), a GitHub repository that provides
a sample of utility classes allowing compilation of
Java sources in memory. The compiled classes are
then analyzed using Java Reflection (Forman and For-
man, 2004) in order to extract information on code
structure. The correct behavior is verified via JU-
nit tests (Gamma et al., 1999). Relevant exercise information is stored in XML files including task de-
scription, source code, test script, and information for
UML diagrams (see below).
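The following minimal sketch (our illustration, not the prototype's implementation) indicates how Java Reflection can collect the structural facts needed for a class diagram, i.e., the superclass and the declared fields with their types; in refacTutor the inspected classes stem from the in-memory compilation step, whereas the sketch simply inspects a class available on the classpath.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class StructureInspector {

    public static void describe(Class<?> cls) {
        System.out.println("class " + cls.getSimpleName());
        // superclass -> generalization edge in the class diagram
        Class<?> parent = cls.getSuperclass();
        if (parent != null && parent != Object.class) {
            System.out.println("  extends " + parent.getSimpleName());
        }
        // declared fields -> attributes or association edges
        for (Field field : cls.getDeclaredFields()) {
            String visibility = Modifier.isPrivate(field.getModifiers()) ? "-" : "+";
            System.out.println("  " + visibility + " " + field.getName()
                    + " : " + field.getType().getSimpleName());
        }
    }

    public static void main(String[] args) throws ClassNotFoundException {
        describe(Class.forName("java.util.LinkedList"));
    }
}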
5.2 Design-diagram Derivation
For automatically creating the as-is design diagrams,
the extracted information is transferred to an inte-
grated diagram editor. PlantUML (Roques, 2017) is
an open-source and Java-based UML diagram editor
that allows for passing plain text in terms of a sim-
ple DSL for creating graphical UML diagrams (such
as class and sequence diagrams), i.a. in PNG or SVG
format. PlantUML also manages the efficient and aes-
thetic composition of diagram elements.
² The software prototype is available for download from http://refactoringgames.com/refactutor
Listing 1: Exemplary Java code.
 1 public class BankAccount {
 2     private Integer number;
 3     private Double balance;
 4     private Limit limit; //(1)
 5     public BankAccount(Integer number, Double balance) {
 6         this.number = number;
 7         this.balance = balance;
 8     }
 9     [...]
10 }
11 public class CheckingAccount extends BankAccount { //(2)
12     [...]
13 }
14 public class Limit {
15     [...]
16 }
17 public class Transaction {
18     public Boolean transfer(Integer senderNumber, Integer receiverNumber, Double amount) {
19         BankAccount sender = new BankAccount(123456, 2000.00); //(3)
20         BankAccount receiver = new BankAccount(234567, 150.00);
21         if (sender.getBalance() >= amount) {
22             receiver.increase(amount);
23             sender.decrease(amount);
24             return true;
25         } else { return false; }
26     }
27 }
Figure 8: Listing with exemplary Java code fragment for a refactoring task (left-hand side) and UML class diagram (right-hand
side) representing the as-is design automatically derived from source code and visualized using PlantUML.
Fig. 8 depicts an application example of source code to be refactored (left-hand side, Listing 1) together with the as-is design diagram (reflecting the current state of the code) in terms of a UML class diagram (right-hand side). In particular, the derived class diagrams provide associations in terms of references to other classes; for example, see (1) in the diagram and line 4 in the listing in Fig. 8. Generalizations represent is-a relationships (between sub-classes and super-classes); see (2) in the diagram and line 11 in the listing. Moreover, the derived class diagrams also provide call (or usage) dependencies between classes (see (3)), which, e.g., represent inter-class calls of attributes or methods from within a method (see lines 19 and 20 in the listing in Fig. 8). For deriving UML sequence diagrams, we apply dynamic reverse-engineering techniques based on the execution traces triggered by the runtime tests, as already described and demonstrated in (Haendler et al., 2015) and (Haendler et al., 2017). As explained in Section 3.4, a combination of UML class and sequence diagrams can support identifying and assessing issues in software design or architecture (Haendler, 2018).
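As an illustration (assumed, not the prototype's actual generator code), the following sketch composes PlantUML text for the as-is class diagram of the example in Listing 1 and writes it to a file that PlantUML can render; association, generalization, and dependency edges use the standard PlantUML arrow notation.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PlantUmlTextBuilder {
    public static void main(String[] args) throws IOException {
        StringBuilder uml = new StringBuilder();
        uml.append("@startuml\n");
        uml.append("class BankAccount {\n  -Integer number\n  -Double balance\n}\n");
        uml.append("class CheckingAccount\n");
        uml.append("class Limit\n");
        uml.append("class Transaction\n");
        uml.append("BankAccount --> Limit\n");            // association via field, see (1)
        uml.append("CheckingAccount --|> BankAccount\n"); // generalization, see (2)
        uml.append("Transaction ..> BankAccount\n");      // call/usage dependency, see (3)
        uml.append("@enduml\n");
        Files.write(Paths.get("as-is-design.puml"), uml.toString().getBytes());
    }
}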
5.3 GUI
As outlined in the overview depicted in Fig. 2, the
prototype provides GUI perspectives for instructors
configuring and preparing exercises and for users
performing the exercises. Each role has a specific
browser-based dashboard with multiple views (a.k.a.
widgets) on different artifacts (as described in Fig.
2). Fig. 9 depicts the user’s perspective with provided
views and an exercise example (which is detailed in
Section 5.4). In particular, the user's perspective comprises a view on the task description (as defined by the instructor; see 1 in Fig. 9). In 2, the test scripts of the (tabbed) JUnit test cases are presented. The console output in 3 reports on the test result and additional hints. The code editor (see 4 in Fig. 9) provides the source code to be modified by the user (with tabs for multiple source files). Via the button in 5, the user can trigger the execution of the runtime tests. For both the to-be design (defined by the instructor beforehand; see 6) and the actual as-is design (automatically derived during and after test execution and reflecting the state of the source code; see 7), tabs for class and sequence diagrams are also provided. The instructor's perspective is quite similar; in addition, it provides an editor for specifying the to-be diagrams.
5.4 Exercise Example
In the following, a short exercise example applying
refacTutor is illustrated in detail. In particular, the
source code, runtime tests, the task to be performed by
the user and the corresponding as-is and to-be design
diagrams are presented.
Source Code. For this exemplary exercise, the fol-
lowing Java code fragment (containing basic classes
and functionality for a simple banking application)
has been prepared by the instructor (see Listing 2 and
also the editor
4
in Fig. 9).
Figure 9: Screenshot of user’s perspective with views provided by the refacTutor prototype implementation.
Listing 2: Java source code.
public class BankAccount {
    private Integer number;
    private Double balance;
    private Limit limit;
    public BankAccount(Integer number, Double balance) {
        this.number = number;
        this.balance = balance;
    }
    public Integer getNumber() {
        return this.number;
    }
    public Double getBalance() {
        return this.balance;
    }
    public void increase(Double amount) {
        this.balance += amount;
    }
    public void decrease(Double amount) {
        this.balance -= amount;
    }
}
public class Limit {
    private Double amount;
    public Double getAmount() {
        return this.amount;
    }
}
public class Transaction {
    public Boolean transfer(Integer senderNumber, Integer receiverNumber, Double amount) {
        BankAccount sender = new BankAccount(123456, 2000.00);
        BankAccount receiver = new BankAccount(234567, 150.00);
        if (sender.getBalance() >= amount) {
            receiver.increase(amount);
            sender.decrease(amount);
            return true;
        } else { return false; }
    }
}
Tests. In this example, four JUnit test cases have been specified by the instructor (see Listing 3 and also the views 2 and 3 in Fig. 9). Two out of these cases fail at the beginning (S_INITIAL).
Listing 3: Test result.
4 test cases were executed (0.264 s)
Test case 1 "transfer_default" failed.
Test case 2 "transfer_limitExceeded" failed.
Test case 3 "checkBalance_default" passed.
Test case 4 "accountConstructor_default" passed.
One of these failing cases is transfer_default, which is specified in the test script in Listing 4.
Listing 4: Test case 1 transfer_default.
@Test
public void transfer_default() {
    BankAccount accountS = new BankAccount(00234573201, 2000.00);
    BankAccount accountR = new BankAccount(00173948725, 1500.00);
    Transaction trans = new Transaction(accountS, accountR);
    assertTrue(trans.transfer(300.00));
    assertEquals(accountS.getBalance(), 1700);
    assertEquals(accountR.getBalance(), 1800);
}
Task. For the exercise, the user is confronted with the task presented in Listing 5 (also see 1 in Fig. 9).
Listing 5: Task description.
Modify the given code example so that all tests pass and the given design (as-is) matches the targeted design (to-be).
In particular, this includes the following modifications:
(1) Modify class "Transaction" according to the following aspects:
    (a) Add attributes "sender" and "receiver", both typed by class "BankAccount", to class "Transaction".
    (b) Add a parametrized constructor with parameters "sender" and "receiver".
    (c) Modify method transfer to only comprise one parameter ("amount" of type Double).
(2) Add a class "CheckingAccount", which should be a subclass of "BankAccount".
Design Diagrams. In the course of the exercise (af-
ter each code modification), the user can review the
diagrams reflecting the actual design derived from
code and compare them with the targeted design di-
agrams (see Fig. 10; also see 6 and 7 in Fig. 9).
Figure 10: Contrasting as-is (automatically derived from
source code; left-hand side) and to-be software design (tar-
geted design specified by the instructor; right-hand side)
both represented as UML class diagrams.
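For completeness, a possible passing solution is sketched below (our sketch; other modification sequences are possible): Transaction receives sender/receiver attributes and a parametrized constructor, transfer() keeps only the amount parameter, and CheckingAccount is added as a subclass of BankAccount, as demanded by the task and the to-be design.

public class Transaction {
    private BankAccount sender;
    private BankAccount receiver;

    public Transaction(BankAccount sender, BankAccount receiver) {
        this.sender = sender;
        this.receiver = receiver;
    }

    public Boolean transfer(Double amount) {
        if (sender.getBalance() >= amount) {
            receiver.increase(amount);
            sender.decrease(amount);
            return true;
        } else {
            return false;
        }
    }
}

// in a separate source file (e.g., a further editor tab)
public class CheckingAccount extends BankAccount {
    public CheckingAccount(Integer number, Double balance) {
        super(number, balance);
    }
}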
6 RELATED WORK
Related work can be roughly divided into the follow-
ing two groups: (1) interactive tutoring systems for
programming, especially those leveraging program
visualization (in terms of UML diagrams) and (2) ap-
proaches for teaching refactoring, especially with fo-
cus on software design.
6.1 Tutoring Systems for Programming
Interactive learning environments in terms of editor-
based web-applications such as Codecademy (Sims
and Bubinski, 2018) are popular nowadays for learn-
ing programming. These tutoring systems provide
learning paths for accomplishing practical compe-
tences in selected programming aspects. They moti-
vate learners via rewards and document the achieved
learning progress. Only very few tutoring systems can
be identified that address software refactoring; see,
e.g., (Sandalski et al., 2011). In particular, Sandalski
et al. present an analysis assistant that provides intel-
ligent decision-support and feedback very similar to a
refactoring recommendation system, see, e.g., (Tsan-
talis et al., 2008). It identifies and highlights simple
code-smell candidates. After the user’s code modifi-
cation, it reacts by proposing (better) refactoring op-
tions.
Related are also tutoring environments that
present interactive feedback in terms of code and de-
sign visualization; for an overview, see, e.g., (Sorva
et al., 2013). Only a few of these approaches pro-
vide the reverse-engineering of UML diagrams, such
as JAVAVIS (Oechsle and Schmitt, 2002) or BlueJ
(Kölling et al., 2003). However, existing approaches
do not target refactoring exercises. For instance, they do not allow for comparing the actual design (as-is) and
the targeted design (to-be), e.g., as defined by the in-
structor, especially not in terms of UML class dia-
grams. Moreover, tutoring systems barely provide in-
tegrated software-behavior evaluation in terms of re-
gression tests (pre-specified by the instructor).
We complement these approaches by presenting
an approach that integrates interactive feedback on
code modifications in terms of software-design qual-
ity and software behavior.
6.2 Approaches for Teaching
Refactoring
Besides tutoring systems (discussed above) other
kinds of teaching refactoring are related to our ap-
proach. In addition to tutoring systems based on in-
structional learning design, a few other learning ap-
proaches in terms of editor-based refactoring games
can be identified that also integrate automated feed-
back. In contrast to tutoring systems which normally
apply small examples, serious games such as (Elezi
et al., 2016; Haendler and Neumann, 2019) are based
on real-world code bases. These approaches also in-
clude means for rewarding successful refactorings by
increasing the game score (or reducing the technical-
debt score) and partially provide competitive and/or
collaborative game variants. However, so far these
games do not include visual feedback that is par-
ticularly important for the training of design-related
refactoring.
Moreover, other approaches without integrated automated feedback have been established. For instance, Smith
et al. propose an incremental approach for teaching
different refactoring types on college level in terms of
learning lessons (Smith et al., 2006; Stoecklin et al.,
2007). The tasks are also designed in an instruc-
tional way with exemplary solutions that have to be
transferred to the current synthetic refactoring can-
didate (i.e. code smell). Within this, they provide
an exemplary learning path for refactorings. Fur-
thermore, Abid et al. conducted an experiment for
contrasting two students groups, of which one per-
formed pre-enhancement (refactoring first, then code
extension) and the second post-enhancement (code
extension first, then refactoring) (Abid et al., 2015).
Then they compared the quality of the resulting code.
López et al. report on a study for teaching refactoring (López et al., 2014). They propose exemplary refac-
toring task categories and classify them according to
Bloom’s taxonomy and learning types. They also de-
scribe learning settings, which aim at simulating real-
world conditions by providing, e.g., an IDE and revi-
sion control systems.
In addition to these teaching and training ap-
proaches, we present a tutoring system that can be
seen as a link in the learning path between lectures
and lessons (that mediate basic knowledge on refac-
toring) on the one hand and environments such as seri-
ous games that already demand practical competences
on the other.
7 DISCUSSION
In this paper, a new approach has been presented
for teaching and training practical competences in
software refactoring. As shown above (Section 6),
the tutoring system is considered as a complement to
other training environments and techniques, such as
lectures for mediating basic understanding (before)
and serious games (afterwards) for consolidating and
extending the practical competences in direction of
higher-level competences such as evaluating refac-
toring options and developing refactoring strategies.
The provided decision support in terms of UML de-
sign diagrams (representing the as-is and to-be soft-
ware design) especially addresses the refactoring of
quality issues on the level of software design and ar-
chitecture, which are not directly visible by review-
ing the source code alone; also see architectural debt
(Kruchten et al., 2012).
The feasibility of the proposed approach has been
illustrated in two respects. On the one hand, in terms
of the usability for training practical competences in
software refactoring via two exemplary application
scenarios, i.e. design refactoring and test-driven de-
velopment, for which both the exercise workflows and
other aspects have been described. On the other hand,
by demonstrating the technical feasibility via a proof-
of-concept implementation in Java that realizes the
core functions. In this context, a more detailed exercise example has also been presented.
As a next step, it is necessary to investigate the ap-
propriateness of the approach in an actual training set-
ting (empirical evaluation) and gather feedback from
system users. From a didactic perspective, a chal-
lenge in creating a training unit is to put together a set
of appropriate refactoring exercises consisting of code
fragments that manifest as smells in corresponding
UML diagrams. An orientation for this can be seen
in rules and techniques for identifying and refactoring
smells on the level of software design (Suryanarayana
et al., 2014). In addition, in (Haendler, 2018) we ex-
plored how 14 kinds of software-design smells can
be identified (and represented) in UML class and se-
quence diagrams. This catalog can serve as a system-
atic guideline for creating exercises targeted on dif-
ferent design smells. For applying the prototype in a
training setting, also further technical extensions and
refinements are planned; for example, by including
views for assessment and progress management (e.g.,
gamification elements such as leaderboards or activity
charts).
8 CONCLUSION
In this paper, we presented a novel approach for teaching software refactoring, especially with a focus on software design issues, via an interactive development and learning environment that provides feedback to the user in terms of reverse-engineered UML diagrams, test behavior and an extensible array of quality aspects. The paper contributes to the state of the art by:
- presenting a training environment that supports users, e.g., university students or (more) experienced software developers, in accomplishing practical competences for software refactoring.
- illustrating exemplary application scenarios that show the feasibility of the approach for training practical competences in refactoring and test-driven development.
- providing a web-based software-technical implementation in Java (refacTutor) as proof-of-concept that demonstrates the technical feasibility of the proposed approach. Here, also a short exercise example is illustrated in detail.
- specifying a generic concept and a lightweight architecture that integrates already existing analysis tools such as test frameworks, diagram editors (and quality analyzers), which allows one to implement the tutoring system for other programming languages as well.
As already discussed in Section 7, in the next step an
empirical evaluation of the prototype implementation
will be done. For this purpose, it is planned to inte-
grate a training unit on refactoring for design smells
into a university course on software design and mod-
eling. The key questions here are how the training via
the tutoring system is perceived by the users and to
what extent it supports them in acquiring practical refactoring competences.
REFERENCES
Abid, S., Abdul Basit, H., and Arshad, N. (2015). Reflec-
tions on teaching refactoring: A tale of two projects.
In Proceedings of the 2015 ACM Conference on Inno-
vation and Technology in Computer Science Educa-
tion, pages 225–230. ACM.
Ajax.org (2019). AceEditor. https://ace.c9.io/ [March 21,
2019].
Alves, N. S., Mendes, T. S., de Mendonça, M. G., Spínola,
R. O., Shull, F., and Seaman, C. (2016). Identifica-
tion and management of technical debt: A systematic
mapping study. Information and Software Technology,
70:100–121.
Argyris, C. (1977). Double loop learning in organizations.
Harvard business review, 55(5):115–125.
Arisholm, E., Briand, L. C., Hove, S. E., and Labiche, Y.
(2006). The impact of UML documentation on soft-
ware maintenance: An experimental evaluation. IEEE
Transactions on Software Engineering, 32(6):365–
381.
Beck, K. (2003). Test-driven development: by example.
Addison-Wesley Professional.
Bloom, B. S. et al. (1956). Taxonomy of educational objec-
tives. vol. 1: Cognitive domain. New York: McKay,
pages 20–24.
Campbell, G. and Papapetrou, P. P. (2013). SonarQube in
action (In Action series). Manning Publications Co.
Cañas, J. J., Bajo, M. T., and Gonzalvo, P. (1994). Men-
tal models and computer programming. International
Journal of Human-Computer Studies, 40(5):795–811.
CoderGears (2018). JArchitect. [March 21, 2019].
Elezi, L., Sali, S., Demeyer, S., Murgia, A., and Pérez, J.
(2016). A game of refactoring: Studying the impact of
gamification in software refactoring. In Proceedings
of the Scientific Workshop Proceedings of XP2016,
page 23. ACM.
Fernandes, E., Oliveira, J., Vale, G., Paiva, T., and
Figueiredo, E. (2016). A review-based comparative
study of bad smell detection tools. In Proceedings of
the 20th International Conference on Evaluation and
Assessment in Software Engineering, page 18. ACM.
Fontana, F. A., Braione, P., and Zanoni, M. (2012). Auto-
matic detection of bad smells in code: An experimen-
tal assessment. J. Object Technology, 11(2):5–1.
Fontana, F. A., Dietrich, J., Walter, B., Yamashita, A., and
Zanoni, M. (2016). Antipattern and code smell false
positives: Preliminary conceptualization and classi-
fication. In Proc. of 23rd International Conference
on Software Analysis, Evolution, and Reengineering
(SANER 2016), volume 1, pages 609–613. IEEE.
Forman, I. R. and Forman, N. (2004). Java Reflection in
Action (In Action series). Manning Publications Co.
Fowler, M., Beck, K., Brant, J., Opdyke, W., and Roberts,
D. (1999). Refactoring: improving the design of exist-
ing code. Addison-Wesley Professional.
Gamma, E., Beck, K., et al. (1999). JUnit: A cook’s tour.
Java Report, 4(5):27–38.
George, C. E. (2000). Experiences with novices: The im-
portance of graphical representations in supporting
mental models. In Proc. of 12 th Workshop of the Psy-
chology of Programming Interest Group (PPIG 2000),
pages 33–44.
Haendler, T. (2018). On using UML diagrams to identify
and assess software design smells. In Proc. of the
13th International Conference on Software Technolo-
gies, pages 413–421. SciTePress.
Haendler, T. and Frysak, J. (2018). Deconstructing
the refactoring process from a problem-solving and
decision-making perspective. In Proc. of the 13th
International Conference on Software Technologies,
pages 363–372. SciTePress.
Haendler, T. and Neumann, G. (2019). Serious refactor-
ing games. In Proc. of the 52nd Hawaii International
Conference on System Sciences, pages 7691–7700.
Haendler, T., Sobernig, S., and Strembeck, M. (2015).
Deriving tailored UML interaction models from
scenario-based runtime tests. In International Con-
ference on Software Technologies, pages 326–348.
Springer.
Haendler, T., Sobernig, S., and Strembeck, M. (2017). To-
wards triaging code-smell candidates via runtime sce-
narios and method-call dependencies. In Proceed-
ings of the XP2017 Scientific Workshops, pages 8:1–9.
ACM.
IMS Global Consortium (2003). IMS simple sequencing
best practice and implementation guide. Final specifi-
cation, March.
Kölling, M., Quig, B., Patterson, A., and Rosenberg, J.
(2003). The BlueJ system and its pedagogy. Com-
puter Science Education, 13(4):249–268.
Kollmann, R., Selonen, P., Stroulia, E., Systa, T., and Zun-
dorf, A. (2002). A study on the current state of the art
in tool-supported UML-based static reverse engineer-
ing. In Ninth Working Conference on Reverse Engi-
neering, 2002. Proceedings., pages 22–32. IEEE.
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy:
An overview. Theory into practice, 41(4):212–218.
Kruchten, P., Nord, R. L., and Ozkaya, I. (2012). Technical
debt: From metaphor to theory and practice. IEEE
software, 29(6):18–21.
Kruchten, P. B. (1995). The 4+1 view model of architecture.
IEEE software, 12(6):42–50.
López, C., Alonso, J. M., Marticorena, R., and Maudes,
J. M. (2014). Design of e-activities for the learning
of code refactoring tasks. In Computers in Education
(SIIE), 2014 International Symposium on, pages 35–
40. IEEE.
Martini, A., Bosch, J., and Chaudron, M. (2014). Architec-
ture technical debt: Understanding causes and a quali-
tative model. In 2014 40th EUROMICRO Conference
on Software Engineering and Advanced Applications,
pages 85–92. IEEE.
Moha, N., Gueheneuc, Y.-G., Duchien, L., and Le Meur,
A.-F. (2010). Decor: A method for the specification
and detection of code and design smells. IEEE Trans-
actions on Software Engineering, 36(1):20–36.
Mugridge, R. (2003). Challenges in teaching test driven de-
velopment. In International Conference on Extreme
Programming and Agile Processes in Software Engi-
neering, pages 410–413. Springer.
Nord, R. L., Ozkaya, I., Kruchten, P., and Gonzalez-Rojas,
M. (2012). In search of a metric for managing archi-
tectural technical debt. In 2012 Joint Working IEEE/I-
FIP Conference on Software Architecture and Euro-
pean Conference on Software Architecture, pages 91–
100. IEEE.
Object Management Group (2015). Unified Mod-
eling Language (UML), Superstructure, Version
2.5.0. http://www.omg.org/spec/UML/2.5 [March 21,
2019].
Oechsle, R. and Schmitt, T. (2002). JavaVis: Automatic
program visualization with object and sequence dia-
grams using the java debug interface (JDI). In Soft-
ware visualization, pages 176–190. Springer.
Opdyke, W. F. (1992). Refactoring object-oriented frame-
works. University of Illinois at Urbana-Champaign
Champaign, IL, USA.
Parnas, D. L. (1994). Software aging. In Proceedings of
16th International Conference on Software Engineer-
ing, pages 279–287. IEEE.
Richner, T. and Ducasse, S. (1999). Recovering high-level
views of object-oriented applications from static and
dynamic information. In Proceedings of the IEEE
International Conference on Software Maintenance,
pages 13–22. IEEE Computer Society.
Roques, A. (2017). PlantUML: UML diagram editor.
https://plantuml.com/ [March 21, 2019].
Sandalski, M., Stoyanova-Doycheva, A., Popchev, I., and
Stoyanov, S. (2011). Development of a refactoring
learning environment. Cybernetics and Information
Technologies (CIT), 11(2).
Scanniello, G., Gravino, C., Genero, M., Cruz-Lemus, J. A.,
Tortora, G., Risi, M., and Dodero, G. (2018). Do
software models based on the UML aid in source-
code comprehensibility? Aggregating evidence from
12 controlled experiments. Empirical Software Engi-
neering, 23(5):2695–2733.
Sims, Z. and Bubinski, C. (2018). Codecademy.
http://www.codecademy.com [March 21, 2019].
Smith, S., Stoecklin, S., and Serino, C. (2006). An in-
novative approach to teaching refactoring. In ACM
SIGCSE Bulletin, volume 38, pages 349–353. ACM.
Sorva, J., Karavirta, V., and Malmi, L. (2013). A review
of generic program visualization systems for introduc-
tory programming education. ACM Transactions on
Computing Education (TOCE), 13(4):15.
Stoecklin, S., Smith, S., and Serino, C. (2007). Teaching
students to build well formed object-oriented meth-
ods through refactoring. ACM SIGCSE Bulletin,
39(1):145–149.
Suryanarayana, G., Samarthyam, G., and Sharma, T.
(2014). Refactoring for software design smells: Man-
aging technical debt. Morgan Kaufmann.
Tempero, E., Gorschek, T., and Angelis, L. (2017). Bar-
riers to refactoring. Communications of the ACM,
60(10):54–61.
Trung, N. K. (2017). InMemoryJavaCompiler.
https://github.com/trung/InMemoryJavaCompiler
[March 21, 2019].
Tsantalis, N., Chaikalis, T., and Chatzigeorgiou, A. (2008).
JDeodorant: Identification and removal of type-
checking bad smells. In 12th European Conference
on Software Maintenance and Reengineering (CSMR
2008), pages 329–331. IEEE.