Understanding Class-level Testability Through Dynamic Analysis

Amjed Tahir, Stephen G. MacDonell, Jim Buchan


It is generally acknowledged that software testing is both challenging and time-consuming. Understanding the factors that may positively or negatively affect testing effort will point to possibilities for reducing this effort. Consequently, there is a significant body of research that has investigated relationships between static code properties and testability. The work reported in this paper complements this body of research by providing an empirical evaluation of the degree of association between runtime properties and class-level testability in object-oriented (OO) systems. The motivation for the use of dynamic code properties comes from the success of such metrics in providing a more complete insight into the multiple dimensions of software quality. In particular, we investigate the potential relationships between the runtime characteristics of production code, represented by Dynamic Coupling and Key Classes, and internal class-level testability. Testability of a class is considered here at the level of unit tests, and two different measures are used to characterise those unit tests. The selected measures relate to test scope and structure: one is intended to measure unit test size, represented by test lines of code, and the other is designed to reflect the intended design, represented by the number of test cases. In this research we found that Dynamic Coupling and Key Classes have significant correlations with class-level testability measures. We therefore suggest that these properties could be used as indicators of class-level testability. These results enhance our current knowledge and should help researchers in the area to build on previous results regarding factors believed to be related to testability and testing. Our results should also benefit practitioners in future class testability planning and maintenance activities.



Paper Citation

in Harvard Style

Tahir A., MacDonell S. G. and Buchan J. (2014). Understanding Class-level Testability Through Dynamic Analysis. In Proceedings of the 9th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE, ISBN 978-989-758-030-7, pages 38-47. DOI: 10.5220/0004883400380047

in Bibtex Style

@conference{tahir2014understanding,
author={Amjed Tahir and Stephen G. MacDonell and Jim Buchan},
title={Understanding Class-level Testability Through Dynamic Analysis},
booktitle={Proceedings of the 9th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE},
year={2014},
pages={38-47},
doi={10.5220/0004883400380047},
isbn={978-989-758-030-7},
}

in EndNote Style

JO - Proceedings of the 9th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE
TI - Understanding Class-level Testability Through Dynamic Analysis
SN - 978-989-758-030-7
AU - Tahir A.
AU - MacDonell S. G.
AU - Buchan J.
PY - 2014
SP - 38
EP - 47
DO - 10.5220/0004883400380047