tices. Given the variation in metric values between early and current application versions, we believe longitudinal studies can provide valuable contributions in this regard.
The second conclusion is that strongly dependent metric pairs can be identified. The same pairs emerge in both our longitudinal evaluation and the cross-sectional study referred to above. Our longitudinal examination has shown these relations to be extremely stable across all application versions, including the earliest ones, and to be impervious to the effects of class size. Their existence should be taken into account when building software quality models based on metric values: they can be used to select the metrics that best express a system property, or to avoid introducing undesired collinearity.
Our third conclusion regards the differences in metric value trends and dependencies between the studied applications. Since cross-sectional studies are unable to capture such differences, this strengthens the importance of longitudinal studies.
We aim to extend our research to other application types, including mobile applications and applications in which user interface code is not dominant. Our goal is to study whether metric thresholds indicative of good design and development practices can be established. Furthermore, we aim to extend our research to applications developed on different platforms and to study the effect of the programming language on metric values. The main goal is to establish a metric-based model for software quality. While such attempts have already been undertaken, they do not rest on a solid understanding of the software development process and its outcomes, which narrows their range of application.
ENASE 2019 - 14th International Conference on Evaluation of Novel Approaches to Software Engineering