5 CONCLUSION AND FUTURE SCOPE
The FFBRD approach discussed in this paper uses models as tools for test generation. It prepares for testing as early as the design phase and keeps quality parameters in check, and the algorithm reduces the time and complexity of generating tests from other sources. If the process is automated, the method can be adapted to processes such as Agile, mitigating common drawbacks of testing. A rigorous study of industry practice and of the use of FFBRD in the development process is still needed; such a study would validate the efficiency of the method and model for quality-driven development. Appropriate analysis of requirements can yield testable, prioritized requirements, and tool support for automating the process can be identified to produce relevant regression test cases. Thorough and systematic research in this direction will make studies in reliability testing in software engineering more efficient and productive.
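To make the prioritization idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it assumes each functional block in the FFBRD carries a reliability estimate, and orders regression tests so that tests exercising the least reliable blocks run first. All names and values are hypothetical.

```python
# Illustrative sketch only; block names, reliability values, and test
# coverage data are assumed, not taken from the paper.

def prioritize_tests(tests, block_reliability):
    """Order tests by the minimum reliability of the blocks they cover.

    tests: dict mapping test name -> set of covered block names
    block_reliability: dict mapping block name -> reliability in [0, 1]
    """
    def risk(test):
        # A test is as urgent as the weakest block it exercises.
        return min(block_reliability[b] for b in tests[test])
    # Ascending reliability: least reliable (riskiest) coverage first.
    return sorted(tests, key=risk)

# Hypothetical FFBRD data: blocks B1..B3 with reliability estimates.
reliability = {"B1": 0.99, "B2": 0.85, "B3": 0.92}
tests = {
    "t_login": {"B1"},
    "t_checkout": {"B2", "B3"},
    "t_search": {"B3"},
}
print(prioritize_tests(tests, reliability))
# -> ['t_checkout', 't_search', 't_login']
```

A real tool would derive the coverage map from the design model and feed the reliability estimates from the FFBRD itself, but the ordering principle would be the same.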
Optimizing Regression Testing with Functional Flow Block Reliability Diagram