Table 9: Results of analysis 1 and analysis 2.

Cover                  | Analysis 1 (k=1) | Analysis 1 (k=2)     | Analysis 1 (k=3)                | Analysis 2 (k=1)
Uncovered Requirements | 8, 14, 18, 19    | 8, 14, 15, 18, 19    | 8, 4, 14, 15, 18, 19            | 8, 14, 18, 19
Approach               | Test Cases (No.) | Test Cases (No.)     | Test Cases (No.)                | Test Cases (No.)
Greedy                 | 5, 9, 15, 21 (4) | 5, 9, 14, 15, 19 (5) | 5, 7, 9, 14, 19, 22, 23, 24 (8) | 3, 4, 9, 13, 15, 21 (6)
GE                     | 9, 19, 21 (3)    | 5, 9, 14, 15, 19 (5) | 5, 7, 9, 14, 19, 22, 23, 24 (8) | 3, 4, 9, 13, 15, 21 (6)
GRE                    | 9, 19, 21 (3)    | 5, 9, 14, 15, 19 (5) | 5, 8, 9, 14, 19, 22, 23, 24 (8) | 3, 4, 9, 13, 15, 21 (6)
Table 10: Results of analysis 4.

Cover                  | k=1
Uncovered Requirements | R16c, R16h, R16j
Approach               | Test Cases (No.)
Greedy                 | T1a, T1b, T1c, T1d, T1e, T1f, T1g, T2b, T2c, T2d, T3a, T3b, T3c, T3d, T3e, T3f, T3g, T3h, T3i, T3j, T3k, T3l, T4a, T4b, T5c, T5d, T5e, T6a, T6b, T6c, T6d, T7a, T7b, T8a, T8b, T8c, T8d, T8e, T8f, T8g, T8h, T8i, T9a, T9b, T9c, T11a, T11b, T11c, T11d, T11e, T11f, T11g, T11h, T11i, T11j, T11k, T12a, T12b, T13a, T13b, T14a, T14b, T14c, T14d, T14e, T14f, T14g, T14h, T14i, T14j, T14k, T15a, T15b, T15c, T15d, T16b, T16d, T16e, T16f, T16g, T16i, T17a, T17c, T17d, T17e, T18a, T18b, T18c, T19a, T19b, T19c, T19d, T19e, T19f, T19g, T19h, T20a, T21a, T21b, T21c (100)
GE                     | Same as Greedy (100)
GRE                    | Same as Greedy (100)
As shown in Tables 9 and 10 respectively, all approaches output the same number of test cases to cover the requirements. The early convergence for k=1 can be attributed to the sparsity of the input matrices, the small value of r_overlap (the average number of test cases that meet a requirement), and the large number of essential test cases, which together leave very few alternatives to select from.
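To make these terms concrete, the sketch below shows how r_overlap and the essential test cases can be derived from a requirement-to-test-case coverage map. It is only an illustration of the definitions; the mapping and names are hypothetical, not data from the SCAN tool.

```python
# Sketch: deriving r_overlap and essential test cases from a coverage mapping.
# The mapping below is hypothetical and only illustrates the definitions.

coverage = {                      # requirement -> test cases that satisfy it
    "R1": {"T1", "T3"},
    "R2": {"T2"},                 # only T2 covers R2, so T2 is essential
    "R3": {"T1", "T2", "T4"},
    "R4": set(),                  # no test case covers R4 -> uncovered requirement
}

covered = {r: tcs for r, tcs in coverage.items() if tcs}
uncovered = [r for r, tcs in coverage.items() if not tcs]

# r_overlap: average number of test cases that meet a (covered) requirement.
r_overlap = sum(len(tcs) for tcs in covered.values()) / len(covered)

# Essential test cases: the sole test case covering some requirement; any
# valid cover must include them, so few alternatives remain to choose from.
essentials = {next(iter(tcs)) for tcs in covered.values() if len(tcs) == 1}

print(uncovered, r_overlap, essentials)   # ['R4'] 2.0 {'T2'}
```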
Figure 3: Run time performance comparison.
Figure 3 presents the run time performance of the Greedy, GE and GRE heuristics in analyses 1 and 2. In analysis 4, Greedy takes 3862.88 µs, GE takes 177.82 µs and GRE takes 2953.98 µs. It can clearly be observed that, in all these analyses, GE performs the best (GE takes considerably less time than Greedy in analysis 1 (k=3), analysis 2 and analysis 4, as there are many essential test cases), followed by Greedy, while GRE generally performs the worst (GRE spends a lot of time in pre-processing to remove redundant test cases). In analysis 4, however, GRE performs better than Greedy because the overhead of its redundancy computation is offset by its essentials pass. In software systems where r_overlap is small and many essential test cases exist, all heuristics will give similar output, but the run time of GE will be the least and it should therefore be preferred.
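The sketch below illustrates a GE-style selection and why the essentials pass pays off: every essential pick removes its requirements before the per-iteration greedy scan starts. It reuses the hypothetical mapping from above and is my own illustration, not the authors' implementation.

```python
# Sketch of a GE-style heuristic: select essential test cases first, then
# greedily pick the test case covering the most remaining requirements.
# Hypothetical data shapes; not the authors' implementation.

def ge_cover(coverage: dict[str, set[str]]) -> set[str]:
    remaining = {r: set(tcs) for r, tcs in coverage.items() if tcs}
    selected = set()

    # Essentials pass: a requirement met by exactly one test case forces that choice.
    for tcs in remaining.values():
        if len(tcs) == 1:
            selected |= tcs
    remaining = {r: tcs for r, tcs in remaining.items() if not (tcs & selected)}

    # Greedy pass on whatever is still uncovered.
    while remaining:
        candidates = set().union(*remaining.values())
        best = max(candidates,
                   key=lambda t: sum(t in tcs for tcs in remaining.values()))
        selected.add(best)
        remaining = {r: tcs for r, tcs in remaining.items() if best not in tcs}

    return selected
```

With the hypothetical mapping shown earlier, the essentials pass would pick T2 (the only test case covering R2, which also covers R3), leaving just R1 to be covered greedily by either T1 or T3.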
If more requirements are added, the existing mapping can be reused: only the mapping of the existing test cases with respect to the new requirements needs to be filled in, and extra test cases may be designed and mapped to cover them. Similarly, if only a subset of the requirements needs to be considered, the existing map can be reused; the additional requirements and the corresponding mapping entries can simply be removed.
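As a small illustration (again using the hypothetical mapping shape from above), extending or pruning the requirement-to-test-case map is a local update; nothing already filled in has to be recomputed.

```python
# Sketch: reusing the mapping when requirements are added or dropped.
# The mapping shape and names are hypothetical.

coverage = {"R1": {"T1", "T3"}, "R2": {"T2"}, "R3": {"T1", "T2", "T4"}}

# Adding a requirement: fill in its entry for the existing test cases,
# plus any newly designed test case that covers it.
coverage["R5"] = {"T3", "T6"}     # T6 would be a new test case written for R5

# Considering only a subset of requirements: drop the entries out of scope.
in_scope = {"R1", "R3", "R5"}
coverage = {r: tcs for r, tcs in coverage.items() if r in in_scope}
```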
The major limitation of the heuristics used is that they do not give any weight to the time required to run a test case. In future work, techniques that also account for the run time of a test case may be considered.
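One possible direction, sketched below purely as an illustration (the function name, data shapes and run times are hypothetical, and this is not a technique proposed in the paper), is a cost-weighted greedy that ranks candidates by the number of requirements newly covered per unit of run time.

```python
# Sketch of a cost-aware variant: a weighted greedy that picks the test case
# with the best ratio of newly covered requirements to its run time.
# Hypothetical data shapes; not from the paper.

def weighted_greedy_cover(coverage: dict[str, set[str]],
                          runtime: dict[str, float]) -> set[str]:
    remaining = {r: set(tcs) for r, tcs in coverage.items() if tcs}
    selected = set()
    while remaining:
        candidates = set().union(*remaining.values())
        # Benefit per unit cost: requirements newly covered / test run time.
        best = max(candidates,
                   key=lambda t: sum(t in tcs for tcs in remaining.values()) / runtime[t])
        selected.add(best)
        remaining = {r: tcs for r, tcs in remaining.items() if best not in tcs}
    return selected
```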
6 CONCLUSION
In this paper, the Greedy, GE and GRE heuristics have been successfully used to cover the specified requirements (in the SRS) of the SUT, the SCAN tool. The minimal cover obtained helps in reducing testing time and effort during regression testing. The existing mapping of test cases and requirements can be reused while adding or removing requirements. A k-cover can be applied if better assurance about coverage is required, at the cost of more testing effort and time. Learning algorithms can be designed to find an optimal value of k. In general, it is tough to rank the efficiency of the Greedy, GE and GRE approaches, as they are heuristics and none of them is an exact algorithm. A comparison of the output and run time performance of these heuristics for a given set of test cases and requirements has been presented for each analysis done on the SCAN tool. It has been observed that if r_overlap is small and there are many essential test cases, all the heuristics give similar output but GE has the best run time performance; thus, it is advisable to use the GE heuristic in such a scenario. Future work could include an empirical evaluation to find out the range of r_overlap and percentage of essential