
ing this process enhanced the validity and reliability
of the definition, thereby increasing researchers’ trust
in the outcomes of this study.
5.2.2 Considering Public Projects
This study relies on public repositories sourced from GitHub. Equivalent metrics for private repositories could not be collected because those repositories are inaccessible, which limits the extent to which the findings can be generalized.
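To make this limitation concrete, only public repositories are discoverable through GitHub's REST search API; private repositories never appear in its results. The following sketch is purely illustrative (the query, star threshold, and page size are assumptions, not the selection criteria used in this study) and shows how public JavaScript repositories can be enumerated in TypeScript on Node 18 or later:

// Illustrative sketch: enumerate public JavaScript repositories via
// GitHub's REST search API. Private repositories are invisible to this
// endpoint, which is exactly the limitation discussed above.
async function listPublicJsRepos(minStars: number): Promise<string[]> {
  const url =
    "https://api.github.com/search/repositories" +
    `?q=language:javascript+stars:>=${minStars}&sort=stars&per_page=10`;
  const response = await fetch(url, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!response.ok) {
    throw new Error(`GitHub search request failed: ${response.status}`);
  }
  const data = await response.json();
  // Each search result carries the repository's "owner/name" identifier.
  return data.items.map((repo: { full_name: string }) => repo.full_name);
}

listPublicJsRepos(1000).then((names) => console.log(names));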
6 CONCLUSION
This paper presents an empirical study of code quality in open-source JavaScript projects using code evaluation tools. We demonstrated that small projects start with high maintainability that declines as they grow; maintainability recovers in medium-scale projects managed by experienced developers, while larger projects show only a limited decrease. Growth in project size and developer experience was accompanied by a decrease in reported issues. However, as project scope grows, the average numbers of estimated errors and lint errors tend to escalate, and an observable increase in cyclomatic complexity, cognitive complexity, code smells, and code duplication accompanies the rise in lines of code (LOC).
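As a concrete illustration of how such metrics can be gathered, the sketch below uses ESLint's Node API (ESLint 8 style) to count lint errors and cyclomatic-complexity warnings across a project's sources. It is a minimal sketch under assumed settings rather than the pipeline used in this study; the complexity threshold of 10 and the glob pattern are arbitrary examples.

import { ESLint } from "eslint";

// Minimal sketch (assumes ESLint 8's Node API): count lint errors and
// cyclomatic-complexity warnings for the files matching a glob pattern.
async function collectLintMetrics(globPattern: string) {
  const eslint = new ESLint({
    useEslintrc: false, // ignore project config so every project gets the same baseline
    overrideConfig: {
      parserOptions: { ecmaVersion: "latest", sourceType: "module" },
      rules: {
        complexity: ["warn", 10], // flag functions whose cyclomatic complexity exceeds 10
      },
    },
  });

  const results = await eslint.lintFiles([globPattern]);

  let lintErrors = 0;
  let complexityWarnings = 0;
  for (const result of results) {
    // In a real configuration, additional enabled rules would contribute
    // to errorCount; here only the complexity rule is active.
    lintErrors += result.errorCount;
    complexityWarnings += result.messages.filter(
      (m) => m.ruleId === "complexity"
    ).length;
  }
  return { files: results.length, lintErrors, complexityWarnings };
}

collectLintMetrics("src/**/*.js").then(console.log);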
Our findings will help developers make better-informed decisions about the relationship between project size, experience levels, and code metrics, promoting improved coding practices for JavaScript-oriented software development. Additionally, by revealing the capabilities of evaluation tools, our study provides valuable insights that can help practitioners select the most suitable tools for code evaluation and foster continuous improvement in the software industry.