and Trefler, 2010; Kharmeh et al., 2011).
Thus the main steps of the overall method are:
1. The requirements, assumed to be refined to fit the
given level of abstraction, are inspected and used
to formulate verification objectives. These are
partitioned into sub-sets that should be checked
by model-analysis, testing, and static code analysis,
using the most suitable technique for each objective
(a minimal sketch of this bookkeeping is given after this list).
In general our recommendation is to perform anal-
ysis first, and use testing for what cannot be ana-
lyzed. The main argument is that analysis is typ-
ically applicable earlier, and gives higher confi-
dence in the results. However, this must be bal-
anced against the criticality and complexity of the
underlying requirement and system, and the effort
that may be needed to perform formal analysis us-
ing a particular technique and tool versus applying
testing (more critical and complex requirements
suggest analysis). Making the right decision relies
on the insight of the V&V engineer. In addition,
there are often functional and extra-functional re-
quirements that cannot be checked on the model
level, because the model is not rich or detailed
enough - e.g., one cannot verify timing on a model
that is purely functional.
2. From the requirements and other engineering ar-
tifacts available, a model is constructed for the
V&V task at hand. It is not trivial to make a
good model that is understandable, accurately and
truthfully captures the behavior needed to deter-
mine the selected V&V-objectives, and contains
no irrelevant details (Mader et al., 2007). Further,
the model should be traceable, such that each structural
element can be explained and either maps to an aspect
of the component being modelled, encodes some implicit
domain knowledge, or represents an explicit assumption.
3. After identification of the V&V-objectives and
model-construction, the specific analysis or test
cases are formulated (or generated), and the respective
analysis or test step is executed to obtain
results.
4. The results include a verdict for each analysis or
test case together with log-files, computed met-
rics, traces, etc. Inspection of the results may
trigger different actions depending on the outcome,
as sketched after this list. If the verdict is pass,
the V&V-objective is considered verified: sufficient
evidence is at hand to reasonably conclude that it is
satisfied, and no further V&V is necessary for it. If
the outcome is fail, corrective actions are needed:
identify the cause of the discrepancy and correct all
impacted artifacts. Additional V&V-objectives may need
to be formulated to rule out similar
defects. If the result is inconc, the V&V-objective
(or underlying requirement) needs further checks,
e.g., by alternative techniques or tools (simulation,
testing, or manual testing), or by refining the
objective (or requirement) into simpler
sub-requirements. If suspect behavior has been
identified, additional V&V-objectives have to be
formulated to determine whether the behavior is
correct or problematic.
5. The V&V plan must be updated with the new ver-
ification status, and a revised plan for the changed
items must be made. The procedure is iterated until
the V&V-engineer has reached the required confi-
dence level.
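To illustrate steps 1 and 4, the following is a minimal sketch in Python of how verification objectives could be recorded together with their assigned technique, and how the pass/fail/inconc verdicts could be dispatched. All names (VVObjective, Technique, handle_result, ...) are hypothetical and are not taken from any particular tool; the sketch shows only the bookkeeping, not the analysis or testing itself.

```python
# Minimal sketch (illustrative only) of the bookkeeping in steps 1 and 4.
# All names (VVObjective, Technique, handle_result, ...) are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Technique(Enum):
    MODEL_ANALYSIS = "model analysis"
    TESTING = "testing"
    STATIC_ANALYSIS = "static code analysis"


class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONC = "inconc"


@dataclass
class VVObjective:
    requirement_id: str
    description: str
    technique: Technique                      # assigned in step 1
    verdict: Optional[Verdict] = None
    evidence: List[str] = field(default_factory=list)  # logs, traces, metrics


def handle_result(obj: VVObjective, verdict: Verdict, plan: List[VVObjective]) -> None:
    """Dispatch on the verdict (step 4) and extend the V&V plan (step 5)."""
    obj.verdict = verdict
    if verdict is Verdict.PASS:
        return                                # objective considered verified
    if verdict is Verdict.FAIL:
        # corrective action is taken on the artifacts; additionally formulate
        # objectives to rule out similar defects
        plan.append(VVObjective(obj.requirement_id,
                                "rule out defects similar to: " + obj.description,
                                Technique.TESTING))
    else:                                     # inconc: recheck with an alternative technique
        alternative = (Technique.TESTING if obj.technique is not Technique.TESTING
                       else Technique.MODEL_ANALYSIS)
        plan.append(VVObjective(obj.requirement_id, obj.description, alternative))
```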
There are a number of ways in which one (test or
analysis) verification step may benefit from results
established by another (the exploitation feedback loop).
Some (non-exhaustive) examples are:
Under-approximation: Use (under-)approximate
techniques like simulation or statistical
model-checking for the objectives where full analysis
turned out to be infeasible.
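As an illustration of such an under-approximate check, the sketch below estimates the probability that a property holds by plain Monte Carlo simulation with a Hoeffding-style error bound; simulate_once and property_holds are hypothetical stand-ins for a stochastic model simulator and the property of interest.

```python
# Illustrative sketch of an under-approximate check via statistical simulation.
# `simulate_once` and `property_holds` are hypothetical stand-ins.
import math
import random
from typing import List, Tuple


def simulate_once(seed: int) -> List[float]:
    """Hypothetical stand-in: one stochastic run of the model, returning a trace."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(100)]


def property_holds(trace: List[float]) -> bool:
    """Hypothetical stand-in: does the trace satisfy the (bounded) property?"""
    return max(trace) < 3.5


def estimate_probability(runs: int = 10_000, confidence: float = 0.95) -> Tuple[float, float]:
    """Estimate P(property) with a Hoeffding-style error bound at the given confidence."""
    successes = sum(property_holds(simulate_once(seed)) for seed in range(runs))
    p_hat = successes / runs
    # Hoeffding bound: P(|p_hat - p| > eps) <= 2 * exp(-2 * runs * eps^2)
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * runs))
    return p_hat, eps


if __name__ == "__main__":
    p, eps = estimate_probability()
    print(f"P(property) ~= {p:.4f} +/- {eps:.4f}")
```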
Coverage Completion: Initially, test suites are
constructed to cover the requirements (e.g., at least
one test per requirement). Test cases may also be
generated based on (potentially stochastic) simulation
executions of a model. In either case, the resulting
coverage of the model (as measured by a structural
coverage criterion like branch or state-coverage) may
be too low. In this case a model-checker may be
used to synthesize the test cases for the missing cov-
erage items (Blackmore et al., 2012) by interpreting
the counterexample as a test case (some MBT-tools use
model-checkers internally to reach a criterion a priori). Similarly, at the
code level, a path synthesis tool based on symbolic
execution may be used to synthesize the missing test
input vector for a given white-box criterion (Gunter
and Peled, 2005; King, 1976).
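The following sketch illustrates the idea of coverage completion on a toy transition system: a breadth-first reachability search stands in for the model-checker query, and its witness path is interpreted as the missing test case. The model, states, and inputs are invented for illustration only.

```python
# Illustrative sketch of coverage completion: for each coverage item not hit by
# the existing suite, a reachability search (a stand-in for a model-checker
# query) produces a witness path that is interpreted as a new test case.
from collections import deque
from typing import Dict, List, Optional, Set, Tuple

# Toy transition system: state -> list of (input, next_state). Hypothetical example.
MODEL: Dict[str, List[Tuple[str, str]]] = {
    "idle":    [("start", "running")],
    "running": [("pause", "paused"), ("stop", "idle")],
    "paused":  [("resume", "running"), ("stop", "idle")],
}


def witness_path(target: str, initial: str = "idle") -> Optional[List[str]]:
    """BFS for an input sequence reaching `target` (the 'counterexample')."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, inputs = queue.popleft()
        if state == target:
            return inputs
        for inp, nxt in MODEL.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, inputs + [inp]))
    return None


def complete_coverage(covered_states: Set[str]) -> Dict[str, List[str]]:
    """Synthesize one test (input sequence) per uncovered state."""
    new_tests = {}
    for state in MODEL:
        if state not in covered_states:
            path = witness_path(state)
            if path is not None:
                new_tests[state] = path
    return new_tests


if __name__ == "__main__":
    # Suppose the requirement-based tests only exercised "idle" and "running":
    print(complete_coverage({"idle", "running"}))   # -> {'paused': ['start', 'pause']}
```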
Targeted A&T: If a test reveals a defect, it may be
worth the effort to target the problematic component
with analysis due to the bug-clustering assumption.
Similarly, if a defect is identified by (model) analysis,
it may be worthwhile to create additional test cases
for that objective to increase confidence in the
implementation. Historical defect data and inspection
results may also be used to guide this targeting
(Elberzhager et al., 2012).
Model-warmup: A test or simulation run revealing an
interesting situation (such as a failing run) should be
analyzed further by importing this scenario into the
model-checker, as sketched below.
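A minimal sketch of such a warm-up, assuming the same kind of toy transition system as above: the input prefix recorded from the interesting run is replayed on the model to obtain the warm-up state, and exhaustive exploration (standing in for model checking) then continues from there. All names are again hypothetical.

```python
# Illustrative sketch of model-warmup: replay the recorded input prefix on the
# model to reach the corresponding state, then explore exhaustively from there.
from collections import deque
from typing import Dict, List, Set

# Toy model reused for illustration: state -> {input: next_state}. Hypothetical.
MODEL: Dict[str, Dict[str, str]] = {
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}


def replay(prefix: List[str], initial: str = "idle") -> str:
    """Replay the recorded input prefix on the model to obtain the warm-up state."""
    state = initial
    for inp in prefix:
        state = MODEL[state][inp]
    return state


def explore_from(state: str) -> Set[str]:
    """Exhaustively explore all states reachable from the warm-up state."""
    frontier, seen = deque([state]), {state}
    while frontier:
        s = frontier.popleft()
        for nxt in MODEL.get(s, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen


if __name__ == "__main__":
    # Prefix recorded from a failing test run; exploration then starts in "paused".
    warm_state = replay(["start", "pause"])
    print(warm_state, explore_from(warm_state))
```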