effective defect proneness prediction ability and that such ability is deeply influenced by project maturity. As future work, the extension effort is twofold. First, we will improve the set of metrics considered as features: the impact of social network metrics will be evaluated by considering the degree, betweenness, and connectedness of committers in the social network of developers working on source code artifacts. Second, we will strengthen the empirical validation by adding new software projects from different domains and with different structural characteristics.
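As an illustration of the planned feature extension, the three network metrics can be computed over a committer collaboration graph. The sketch below is a minimal pure-Python example under illustrative assumptions: the commit log is hypothetical, and two committers are linked when they have modified the same file (the actual linking rule in the extended study may differ).

```python
from collections import defaultdict, deque

# Hypothetical commit log: (committer, file touched).
commits = [
    ("alice", "core.py"), ("bob", "core.py"),
    ("bob", "ui.py"), ("carol", "ui.py"),
]

# Build the undirected committer network: link developers who
# modified the same source code artifact.
by_file = defaultdict(set)
for dev, path in commits:
    by_file[path].add(dev)

adj = defaultdict(set)
for devs in by_file.values():
    for a in devs:
        for b in devs:
            if a != b:
                adj[a].add(b)

def degree(adj):
    """Number of direct collaborators of each committer."""
    return {v: len(nbrs) for v, nbrs in adj.items()}

def connectedness(adj):
    """Fraction of other committers reachable from each committer."""
    out, n = {}, len(adj)
    for s in adj:
        seen, q = {s}, deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        out[s] = (len(seen) - 1) / (n - 1) if n > 1 else 0.0
    return out

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                                     # BFS from s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while order:                                 # accumulate dependencies
            w = order.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2.0 for v, b in bc.items()}       # undirected: halve

print(degree(adj))
print(betweenness(adj))
print(connectedness(adj))
```

In this toy network, "bob" bridges "alice" and "carol", so he has the highest degree and all of the betweenness, while every committer is reachable from every other (connectedness 1.0); such values would then be fed to the defect prediction model as additional features.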
A Multi-source Machine Learning Approach to Predict Defect Prone Components