• Combining systematic generation of input sequences with a degree of randomness, together with fuzzing at time boundaries, performs better than pure random test case generation.
Although the experiments were performed on applications from the automotive domain, we expect similar benefits for reactive systems in other domains as well. The current version of the presented technique faces scalability issues when generating test cases for applications with large and complex time-based requirements, as observed in case study 1 with features such as Flasher and Alarm. Going forward, we aim to overcome this issue by adding more intelligence to the mechanism that generates input sequences. We also aim to enhance the coverage criteria of the technique by evaluating the effectiveness of various coverage criteria in finding bugs in the system under test, and to enable coverage of long sequences of interacting requirements.
ENASE 2015 - 10th International Conference on Evaluation of Novel Software Approaches to Software Engineering