many previously undetected clones, which have been termed "logical clones". In Java this phenomenon is marginal: it amounts to 5% more clones than in regular Java, which is statistically negligible in small projects.
Furthermore, the trend of software quality metrics in the presence of clones was studied, and it emerged that some metrics differ across the various projects.
Among the problems encountered, it should be considered that clone detection compares the textual output of the decompiler, which poses a problem because the results differ depending on the chosen decompiler and on its configuration and version. An example of this effect is the dot operator used for name resolution: the decompiler always imports the class with the import keyword, while some projects prefer to use the dot operator. This created ambiguity and made it impossible to match some methods, but at the same time it allowed the detection of clones that NICAD does not consider as such, because it does not differentiate between the method/property access operator and the operator used to access a class in a package.
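To make the ambiguity concrete, a minimal Java sketch (not taken from the studied projects) of two textually different but semantically identical references to the same class; a purely textual comparison treats the two lines as different, while a decompiler normalizes both to the import-based form:

```java
import java.util.ArrayList;

public class DotOperatorExample {
    public static void main(String[] args) {
        // Relies on the import declaration above.
        ArrayList<String> viaImport = new ArrayList<>();
        // Fully qualified via the dot operator; no import needed.
        java.util.ArrayList<String> viaDot = new java.util.ArrayList<>();
        // The runtime class is identical even though the source text differs.
        System.out.println(viaImport.getClass() == viaDot.getClass()); // prints "true"
    }
}
```

Both declarations resolve to the same runtime class, which is why a textual matcher that does not normalize name resolution will miss (or spuriously report) such pairs.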
A deeper analysis should consider different decompilers or work directly at the bytecode level of the JVM to detect repeating instruction patterns. It should be noted that at the moment there are no mature tools capable of working on Java bytecode at this level. A possible development could be the extension of the project to other languages: at present, NICAD supports C, C#, and Python in addition to Java, whereas the CK tool only supports Java.
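The bytecode-level direction mentioned above could be prototyped with tools already shipped in the JDK. A hedged sketch (not the tooling used in this study, and only a crude stand-in for real bytecode clone detection): dumping opcode mnemonics with javap and counting repeated n-grams of instructions. The class and method names here are illustrative assumptions.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OpcodeNgrams {
    // Instruction lines of `javap -c` look like "   4: invokespecial #1 ...".
    private static final Pattern OPCODE = Pattern.compile("^\\s*\\d+:\\s+(\\w+)");

    // Disassembles the given class with javap and counts repeated
    // n-grams over the resulting opcode-mnemonic stream.
    public static Map<String, Integer> ngramCounts(String className, int n) throws Exception {
        Process p = new ProcessBuilder("javap", "-c", className).start();
        List<String> ops = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                Matcher m = OPCODE.matcher(line);
                if (m.find()) ops.add(m.group(1));
            }
        }
        p.waitFor();
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + n <= ops.size(); i++) {
            counts.merge(String.join(" ", ops.subList(i, i + n)), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        // Platform classes can be disassembled without any extra files on disk.
        Map<String, Integer> counts = ngramCounts("java.lang.String", 3);
        System.out.println("distinct 3-grams: " + counts.size());
    }
}
```

A real bytecode-level detector would of course need to abstract operands, constant-pool indices, and register numbers rather than just mnemonics, which is where the lack of mature tooling noted above becomes apparent.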
ICSOFT 2023 - 18th International Conference on Software Technologies