eralization of the obtained results. Only five open-source projects were considered for evaluation, all written in the same programming language (Java) and each analyzed in a single version. However, we performed cross-project validation: each prediction model was validated using the other four projects.
6 CONCLUSION AND FUTURE WORK
The reliability of a system is investigated in this paper through bugs and changes. Our approach uses a neural network model to predict reliability considering two relevant aspects: post-release defects and the changes applied during the software development life cycle. The CK metrics are used as independent variables in the prediction model.
Five open-source projects are used to design the experiments, and two major perspectives are explored, both through cross-project experiments: identifying the optimum weight values for bugs and changes, and determining the most suitable project to be used for training.
The results show that, for both cross-project experiments, the best accuracy is obtained by the models assigning the highest weight to bugs (the 75B25C weighting), and that the most appropriate project to be used for training is the PDE project.
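For illustration only, the sketch below shows one way such a prediction model could be assembled: the CK metrics serve as the independent variables, and the target combines post-release bugs and changes using the 75B25C weighting reported above. The library (scikit-learn), the column names, and the network size are assumptions for the sketch, not the exact implementation used in our experiments.

    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # The six CK metrics used as independent variables (one row per class).
    CK_METRICS = ["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"]

    def reliability_target(df, bug_weight=0.75, change_weight=0.25):
        # 75B25C: weighted combination of post-release bugs and changes.
        return bug_weight * df["bugs"] + change_weight * df["changes"]

    def cross_project_score(train_df, test_df):
        # Cross-project setting: train on one project (e.g. PDE),
        # evaluate on the classes of a different project.
        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                         random_state=0),
        )
        model.fit(train_df[CK_METRICS], reliability_target(train_df))
        return model.score(test_df[CK_METRICS], reliability_target(test_df))

Here train_df and test_df are assumed to be tabular data frames with the CK metric columns plus the bugs and changes counts per class; the weights can be varied to reproduce the other weighting schemes.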
As future work, we aim to extend the proposed model for reliability prediction and to better demonstrate its applicability through additional case studies. At the same time, further investigation into how to empirically determine the metric weights will be considered.
REFERENCES
Carleton, A. D., Harper, E., Menzies, T., Xie, T., Eldh, S., and Lyu, M. R. (2020). The AI effect: Working at the intersection of AI and SE. IEEE Software, 37(4):26–35.
Chidamber, S. and Kemerer, C. F. (1994). A metrics suite for object oriented design. IEEE Transactions on Software Engineering, 20(6):476–493.
Chitra, S., Thiagarajan, K., and Rajaram, M. (2008). Data collection and analysis for the reliability prediction and estimation of a safety critical system using AIRS. In 2008 International Conference on Computing, Communication and Networking, pages 1–7.
D'Ambros, M., Lanza, M., and Robbes, R. (2010). An extensive comparison of bug prediction approaches. In 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010), pages 31–41.
Derrac, J., Garcia, S., Molina, D., and Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1:3–18.
Geremia, S. and Tamburri, D. A. (2018). Varying defect prediction approaches during project evolution: A preliminary investigation. In 2018 IEEE Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE), pages 1–6.
Kitchenham, B., Pfleeger, S. L., and Fenton, N. E. (1995). Towards a framework for software measurement validation. IEEE Transactions on Software Engineering, 21(12):929–944.
Li, W. (1998). Another metric suite for object-oriented programming. Journal of Systems and Software, 44(2):155–162.
Li, X., Mutha, C., and Smidts, C. S. (2016). An automated software reliability prediction system for safety critical software. Empirical Software Engineering, 21(6):2413–2455.
Lou, J., Jiang, Y., Shen, Q., Shen, Z., Wang, Z., and Wang, R. (2016). Software reliability prediction via relevance vector regression. Neurocomputing, 186(C):66–73.
Mahmood, Z., Bowes, D., Lane, P. C., and Hall, T. (2015). What is the impact of imbalance on software defect prediction performance? In Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering, pages 1–4.
Marinescu, R. (2002). Measurement and Quality in Object Oriented Design. PhD thesis, Faculty of Automatics and Computer Science, University of Timisoara.
Merseguer, J. (2003). Software Performance Engineering based on UML and Petri nets. PhD thesis, University of Zaragoza, Spain.
Moser, R., Pedrycz, W., and Succi, G. (2008). A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. In 2008 ACM/IEEE 30th International Conference on Software Engineering, pages 181–190.
Nayrolles, M. and Hamou-Lhadj, A. (2018). Clever: Combining code metrics with clone detection for just-in-time fault prevention and resolution in large industrial projects. In Proceedings of the 15th International Conference on Mining Software Repositories, pages 153–164. Association for Computing Machinery.
Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, N.J.
Schneidewind, N. F. (1997). Reliability modeling for safety-critical software. IEEE Transactions on Reliability, 46(1):88–98.
Shrikanth, N., Majumder, S., and Menzies, T. (2021). Early life cycle software defect prediction. Why? How? In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages 448–459.
Tang, M.-H., Kao, M.-H., and Chen, M.-H. (1999). An empirical study on object-oriented metrics. In Proceedings Sixth International Software Metrics Symposium (Cat. No.PR00403), pages 242–249.