Detection of Software Anomalies Using Object-oriented Metrics

Renato Correa Juliano, Bruno A. N. Travençolo, Michel S. Soares

Abstract

The development of quality software has long been the aim of many studies focused on producing software with high effectiveness and quality. To evaluate software quality, software metrics were proposed, providing an effective tool to analyze important features such as maintainability, reusability and testability. The Chidamber and Kemerer metrics (CK metrics) are frequently applied to analyze Object-Oriented Programming (OOP) features related to structure, inheritance and message calls. The main purpose of this article is to gather results from studies that used the CK metrics for source code evaluation and, based on the CK metrics, perform a review of software metrics and the values obtained. The means and standard deviations obtained across all the studied papers are presented, both for Java and C++ projects. Software anomalies are then identified by comparing the results of the software metrics described in those studies. This article contributes by suggesting values for software metrics that, according to the literature, indicate a high probability of failure. Another contribution is to analyze which CK metrics are successfully used (or not) in activities such as predicting error proneness, analyzing the impact of refactoring on metrics and examining the ease of white-box reuse based on metrics. We found that, in most of the studied articles, CBO, RFC and WMC are often useful, whereas hierarchical metrics such as DIT and NOC are not useful in such activities. The results of this paper can be used to guide software development, helping to manage the development process and prevent future problems.
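The CK suite discussed in the abstract includes metrics such as WMC (Weighted Methods per Class), DIT (Depth of Inheritance Tree) and NOC (Number of Children). As an illustrative sketch only, and not the authors' tooling, simplified versions of three of these metrics can be computed for Python classes via introspection; the unit-complexity approximation of WMC and the example class names are assumptions made for this example.

```python
import inspect

def ck_metrics(cls):
    """Simplified CK-style metrics for a Python class.

    WMC is approximated by counting methods defined directly on the
    class (assigning each method a complexity of 1); DIT is the depth
    of the inheritance chain down from `object` (single inheritance
    assumed); NOC counts direct subclasses known at call time.
    """
    wmc = sum(1 for member in vars(cls).values() if inspect.isfunction(member))
    dit = len(cls.__mro__) - 1  # linear MRO assumed (no multiple inheritance)
    noc = len(cls.__subclasses__())
    return {"WMC": wmc, "DIT": dit, "NOC": noc}

# Hypothetical example hierarchy:
class Shape:
    pass

class Circle(Shape):
    def describe(self):
        return "a circle"
    def sides(self):
        return 0

class Ellipse(Circle):
    pass

print(ck_metrics(Circle))  # {'WMC': 2, 'DIT': 2, 'NOC': 1}
```

In full tools the WMC weights are usually cyclomatic complexities rather than 1, and DIT/NOC are computed over the whole static class graph; this sketch only conveys the intuition behind the thresholds the paper surveys.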

References

  1. Abuasad, A. and Alsmadi, I. (2012). Evaluating the Correlation between Software Defect and Design Coupling Metrics. In International Conference on Computer, Information and Telecommunication Systems (CITS), pages 1-5.
  2. Bar-Yam, Y. (2003). When Systems Engineering Fails - Toward Complex Systems Engineering. In Proceedings of the International Conference on Systems, Man & Cybernetics, volume 2, pages 2012-2028.
  3. Benestad, H., Anda, B., and Arisholm, E. (2006). Assessing Software Product Maintainability Based on Class-Level Structural Measures. In Product-Focused Software Process Improvement, volume 4034 of Lecture Notes in Computer Science, pages 94-111. Springer.
  4. Berry, D. M. (2004). The Inevitable Pain of Software Development: Why There Is No Silver Bullet. In Radical Innovations of Software and Systems Engineering in the Future, Lecture Notes in Computer Science, pages 50-74.
  5. Boehm, B. W. (2006). A View of 20th and 21st Century Software Engineering. In ICSE '06: Proceedings of the 28th International Conference on Software Engineering, pages 12-29.
  6. Charette, R. N. (2005). Why Software Fails. IEEE Spectrum, 42(9):42-49.
  7. Chidamber, S. R. and Kemerer, C. F. (1994). A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, 20(6):476-493.
  8. Dallal, J. A. (2012). Constructing Models for Predicting Extract Subclass Refactoring Opportunities using Object-Oriented Quality Metrics. Information and Software Technology, 54(10):1125-1141.
  9. English, M., Exton, C., Rigon, I., and Cleary, B. (2009). Fault Detection and Prediction in an Open-source Software Project. In Proceedings of the 5th International Conference on Predictor Models in Software Engineering, PROMISE '09, pages 17:1-17:11, New York, NY, USA. ACM.
  10. Fenton, N. E. and Pfleeger, S. L. (1998). Software Metrics: A Rigorous and Practical Approach. PWS Publishing Co., Boston, MA, USA, 2nd edition.
  11. Glass, R. L. (1999). The Realities of Software Technology Payoffs. Communications of the ACM, 42(2):74-79.
  12. Gyimothy, T., Ferenc, R., and Siket, I. (2005). Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction. IEEE Transactions on Software Engineering, 31(10):897-910.
  13. Harrison, R., Counsell, S., and Nithi, R. V. (1998). An Investigation into the Applicability and Validity of Object-Oriented Design Metrics. Empirical Software Engineering, 3(3):255-273.
  14. Huck, S. W. (2012). Reading Statistics and Research. Pearson, Boston, MA, USA.
  15. Janes, A., Scotto, M., Pedrycz, W., Russo, B., Stefanovic, M., and Succi, G. (2006). Identification of Defect-Prone Classes in Telecommunication Software Systems Using Design Metrics. Information Sciences, 176(24):3711-3734.
  16. Johari, K. and Kaur, A. (2012). Validation of Object Oriented Metrics Using Open Source Software System: An Empirical Study. SIGSOFT Software Engineering Notes, 37(1):1-4.
  17. Kakarontzas, G., Constantinou, E., Ampatzoglou, A., and Stamelos, I. (2012). Layer Assessment of Object-Oriented Software: A Metric Facilitating White-Box Reuse. Journal of Systems and Software, 86:349-366.
  18. Kitchenham, B. (2010). What's up with Software Metrics? A Preliminary Mapping Study. Journal of Systems and Software, 83(1):37-51.
  19. Kocaguneli, E., Gay, G., Menzies, T., Yang, Y., and Keung, J. W. (2010). When to Use Data from Other Projects for Effort Estimation. In Proceedings of the IEEE/ACM International Conference on Automated Software Engineering, ASE '10, pages 321-324, New York, NY, USA. ACM.
  20. Lanza, M. (2003). CodeCrawler - Lessons Learned in Building a Software Visualization Tool. In Proceedings of CSMR 2003, pages 409-418. IEEE Press.
  21. Lanza, M. and Marinescu, R. (2006). Object Oriented Metrics in Practice. Springer, Berlin.
  22. Lorenz, M. and Kidd, J. (1994). Object-Oriented Software Metrics: A Practical Guide. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
  23. Menzies, T., Butcher, A., Marcus, A., Zimmermann, T., and Cok, D. (2011). Local vs. Global Models for Effort Estimation and Defect Prediction. In Proceedings of the 2011 26th IEEE/ACM International Conference on Automated Software Engineering, ASE '11, pages 343-351, Washington, DC, USA. IEEE Computer Society.
  24. Moser, R., Sillitti, A., Abrahamsson, P., and Succi, G. (2006). Does Refactoring Improve Reusability? In Morisio, M., editor, Reuse of Off-the-Shelf Components, volume 4039 of Lecture Notes in Computer Science, pages 287-297. Springer Berlin Heidelberg.
  25. Nair, T. G. and Selvarani, R. (2011). Defect Proneness Estimation and Feedback Approach for Software Design Quality Improvement. Information and Software Technology, 54(3):274-285.
  26. Olague, H., Etzkorn, L., Gholston, S., and Quattlebaum, S. (2007). Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes. IEEE Transactions on Software Engineering, 33(6):402-419.
  27. Olbrich, S., Cruzes, D. S., Basili, V., and Zazworka, N. (2009). The Evolution and Impact of Code Smells: A Case Study of Two Open Source Systems. In Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement, ESEM '09, pages 390-400.
  28. Radjenović, D., Heričko, M., Torkar, R., and Živkovič, A. (2013). Software Fault Prediction Metrics: A Systematic Literature Review. Information and Software Technology, 55(8):1397-1418.
  29. Shatnawi, R. (2010). A Quantitative Investigation of the Acceptable Risk Levels of Object-Oriented Metrics in Open-Source Systems. IEEE Transactions on Software Engineering, 36(2):216-225.
  30. Shatnawi, R. and Li, W. (2008). The Effectiveness of Software Metrics in Identifying Error-Prone Classes in Post-Release Software Evolution Process. Journal of Systems and Software, 81(11):1868-1882.
  31. Singh, S. and Kahlon, K. (2011). Effectiveness of Encapsulation and Object-Oriented Metrics to Refactor Code and Identify Error Prone Classes using Bad Smells. SIGSOFT Software Engineering Notes, 36(5):1-10.
  32. Singh, S. and Kahlon, K. S. (2012). Effectiveness of Refactoring Metrics Model to Identify Smelly and Error Prone Classes in Open Source Software. SIGSOFT Software Engineering Notes, 37(2):1-11.
  33. Stroggylos, K. and Spinellis, D. (2007). Refactoring - Does It Improve Software Quality? In Fifth International Workshop on Software Quality, 2007. WoSQ'07.
  34. Subramanyam, R. and Krishnan, M. S. (2003). Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects. IEEE Transactions on Software Engineering, 29(4):297-310.
  35. Wettel, R. and Lanza, M. (2007). Visualizing Software Systems as Cities. In 4th IEEE International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT 2007), pages 92-99.
  36. Wirth, N. (2008). A Brief History of Software Engineering. IEEE Annals of the History of Computing, 30(3):32-39.
  37. Zhou, Y. and Leung, H. (2006). Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults. IEEE Transactions on Software Engineering, 32(10):771-789.
  38. Zhou, Y., Xu, B., and Leung, H. (2010). On the Ability of Complexity Metrics to Predict Fault-Prone Classes in Object-Oriented Systems. Journal of Systems and Software, 83(4):660-674.


Paper Citation


in Harvard Style

Correa Juliano R., A. N. Travençolo B. and S. Soares M. (2014). Detection of Software Anomalies Using Object-oriented Metrics. In Proceedings of the 16th International Conference on Enterprise Information Systems - Volume 2: ICEIS, ISBN 978-989-758-028-4, pages 241-248. DOI: 10.5220/0004889102410248


in Bibtex Style

@conference{iceis14,
author={Renato Correa Juliano and Bruno A. N. Travençolo and Michel S. Soares},
title={Detection of Software Anomalies Using Object-oriented Metrics},
booktitle={Proceedings of the 16th International Conference on Enterprise Information Systems - Volume 2: ICEIS},
year={2014},
pages={241-248},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004889102410248},
isbn={978-989-758-028-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 16th International Conference on Enterprise Information Systems - Volume 2: ICEIS
TI - Detection of Software Anomalies Using Object-oriented Metrics
SN - 978-989-758-028-4
AU - Correa Juliano R.
AU - A. N. Travençolo B.
AU - S. Soares M.
PY - 2014
SP - 241
EP - 248
DO - 10.5220/0004889102410248