Impact-driven Regression Test Selection for Mainframes

Abhishek Dharmapurikar, Benjamin J. R. Wierwille, Jayashree Ramanthan, Rajiv Ramnath

2013

Abstract

Software testing is particularly expensive for legacy systems such as mainframes. Critical to many large enterprises, these systems are in perpetual maintenance, where even small changes typically trigger an end-to-end regression test. This "retest-all" approach is used to ensure confidence in the functioning of the system, but it is impractical, primarily because of its resource demands and because user stories generated within an agile process require rapid changes to the system. This research aims to reduce the amount of regression testing required and its associated costs. The reduction is achieved by selecting only those tests that cover changed assets and the assets they 'impact'. The impact analysis leverages the availability of modern static code analysis tools and dedicated test environments for mainframes. Applying our proposed impact technique to a real-world mainframe application yielded test savings of about 34%.



Paper Citation


in Harvard Style

Dharmapurikar A., Wierwille B. J. R., Ramanthan J. and Ramnath R. (2013). Impact-driven Regression Test Selection for Mainframes. In Proceedings of the 1st International Workshop in Software Evolution and Modernization - Volume 1: SEM, (ENASE 2013). ISBN 978-989-8565-66-2, pages 55-66. DOI: 10.5220/0004601300550066


in Bibtex Style

@conference{sem13,
author={Abhishek Dharmapurikar and Benjamin J. R. Wierwille and Jayashree Ramanthan and Rajiv Ramnath},
title={Impact-driven Regression Test Selection for Mainframes},
booktitle={Proceedings of the 1st International Workshop in Software Evolution and Modernization - Volume 1: SEM, (ENASE 2013)},
year={2013},
pages={55-66},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004601300550066},
isbn={978-989-8565-66-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 1st International Workshop in Software Evolution and Modernization - Volume 1: SEM, (ENASE 2013)
TI - Impact-driven Regression Test Selection for Mainframes
SN - 978-989-8565-66-2
AU - Dharmapurikar A.
AU - Wierwille B. J. R.
AU - Ramanthan J.
AU - Ramnath R.
PY - 2013
SP - 55
EP - 66
DO - 10.5220/0004601300550066