consist of an absolute and relative measure. However,
size metrics on the feature (M10) and safety (M34)
levels consist of absolute measures only.
Step 6: Derive Missing Metrics. This step identified
46 additional metrics, which were incorporated
into the overall metric set. Due
to space restrictions, we do not include all the met-
rics that we derived upon identifying the metric gaps.
However, Table 11 highlights the new metrics in red,
and we give some examples here:
M52: No. of requirements with in-links from test
cases per feature per baseline.
M54: No. of requirements with in-links from test
cases per safety requirement category per baseline.
M56: Percentage of requirements per feature per
baseline.
M67: Difference between requirements size for
release Z in baselines X and Y.
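To make metrics such as M56 concrete, the following sketch (not the authors' tooling) shows how a percentage-of-requirements-per-feature-per-baseline measure could be computed from requirement records; all field names and values are hypothetical.

```python
from collections import Counter

# Hypothetical requirement records; "baseline" and "feature" fields are
# illustrative stand-ins for the paper's meta-data items.
requirements = [
    {"req_id": "R1", "baseline": "B1", "feature": "F1"},
    {"req_id": "R2", "baseline": "B1", "feature": "F1"},
    {"req_id": "R3", "baseline": "B1", "feature": "F2"},
    {"req_id": "R4", "baseline": "B2", "feature": "F1"},
]

# Requirements per baseline, and per (baseline, feature) pair.
per_baseline = Counter(r["baseline"] for r in requirements)
per_pair = Counter((r["baseline"], r["feature"]) for r in requirements)

# M56-style measure: a feature's share of its baseline's requirements.
m56 = {
    (bl, ft): 100.0 * n / per_baseline[bl]
    for (bl, ft), n in per_pair.items()
}
print(round(m56[("B1", "F1")], 1))  # 66.7
```

The same grouping by a second key (release or safety requirement category instead of feature) would yield M54- or M67-style variants.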
Table 11: Identification of missing metrics from Step 6.
Attribute Level Metrics
Coverage Baseline M1, M2, M3, M4, M5, M6, M7, M8
Coverage Feature M44, M45, M46, M47, M49, M50, M51, M52
Coverage Release M53, M54, M55, M56, M57, M58, M59, M60
Coverage Safety M61, M62, M63, M64, M65, M66, M67, M68
Size Baseline M9
Size Feature M10, M11
Size Release M28, M29
Size Safety M34, M35, M36, M37, M38, M39, M40, M41
Volatility Baseline M12, M13, M14, M15, M16, M17, M18, M19
Volatility Feature M20, M21, M22, M23, M24, M25, M26, M27
Volatility Release M69, M70, M71, M72, M73, M74, M75, M76
Volatility Safety M77, M78, M79, M80, M81, M82, M83, M84
Growth Baseline M42, M85
Growth Feature M43, M86
Growth Release M87, M88
Growth Safety M89, M90
Note: We chose not to merge Tables 10 and 11 in order to highlight the metric
gaps in Table 10.
Step 7: Identify Requirements Meta-data. Based
on the final set of metrics, we identified the meta-data
needed to calculate each metric. Identifying the
meta-data items for each of the 90 metrics resulted in
the following set of unique requirements meta-data
items: Requirement ID, Requirement type, Requirement
feature ID, Requirement text, Requirement release
number, Safety requirement type, Out-links from
requirements to external artifacts, and In-links to
requirements. As an example, M1 from Table 9
would require an out-links-from-requirements-to-external-artifacts
meta-data item, which we call ReqOutlinks
for illustration purposes. Thus, the formula for
M1 would be: count if ReqOutlinks ≠ NULL
This set of meta-data items was necessary for
ensuring that the meta-data were consistent across
projects and for applying the metrics, which, in turn,
facilitated the measurement procedure.
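As a minimal sketch (not the authors' implementation), the M1 formula above could be applied to requirement records as follows; the field names are ours, for illustration only.

```python
# Hypothetical meta-data records; "req_outlinks" stands in for the
# ReqOutlinks item (out-links from requirements to external artifacts).
requirements = [
    {"req_id": "R1", "req_outlinks": ["DES-4", "TC-12"]},
    {"req_id": "R2", "req_outlinks": None},  # no out-links recorded
    {"req_id": "R3", "req_outlinks": ["DES-7"]},
]

# M1: count if ReqOutlinks is not NULL.
m1 = sum(1 for r in requirements if r["req_outlinks"] is not None)
print(m1)  # 2
```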
4.3 Observed Benefits
After illustrating the application of the approach in
one of the rail automation projects, we discuss the
overall benefits we observed from applying the ap-
proach to all the three projects listed in Table 7.
Metric Breadth. While GQM allows the identification
of an initial set of metrics according to a
project’s goals, which, in turn, address the stakeholders’
information needs, our experience with large systems
projects that involve many internal stakeholders
has shown that further concerns regarding the requirements
metrics are identified once an initial set of
metrics is in place, which prompts further metric derivation.
For example, as seen in Table 10, the initial set of met-
rics measured the design and test coverage of require-
ments for a requirements baseline. Upon implement-
ing the metrics, an architect requested measures of
requirements coverage per feature, for which we derived
further metrics. Moreover, our approach allowed
us to derive the coverage metrics on the release and
safety levels as well (see Table 11), which were also
used by different internal stakeholders. Thus, our ap-
proach improves the breadth of the derived metrics
by identifying the metric gaps and, subsequently, de-
riving the associated metrics. Because the approach
identifies the metric gaps by analyzing the attributes
and levels of the initial set of metrics that were derived
using GQM and which are based on the
project’s information needs, the missing metrics will
likely also address measurement needs that the inter-
nal stakeholders were not cognizant of.
Organization of Data. Prior to using the approach
and upon deriving the initial set of metrics in the
AR study (see Section 4.1), the measures were doc-
umented in spreadsheets in an unorganized manner
where metrics lacked accurate labels and unrelated
metrics were grouped together. The identification of
attributes and levels in our approach served as a tem-
plate, which allowed us to structure measures in an
organized and consistent format across projects. Fig-
ure 3 shows a snapshot from the requirements met-
ric report for Project P3 in Table 7 in which the
measures are organized according to requirements at-
tributes (size, growth, volatility, status, coverage) and
levels (baseline, feature, release).
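The attribute/level template described above can be sketched as a simple data structure; the attribute and level names are taken from the text, while the metric value recorded is hypothetical.

```python
# Attributes and levels as listed for the Project P3 metric report.
ATTRIBUTES = ["size", "growth", "volatility", "status", "coverage"]
LEVELS = ["baseline", "feature", "release"]

# One slot per (attribute, level) combination keeps related metrics
# grouped together and consistently labeled across projects.
report = {(attr, lvl): {} for attr in ATTRIBUTES for lvl in LEVELS}
report[("coverage", "feature")]["M52"] = 17  # record a measure
print(len(report))  # 15
```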
Completeness and Consistency of Requirements
Meta-data. Initially, we adopted a tedious trial and
error approach in which we analyzed each project’s
ENASE 2020 - 15th International Conference on Evaluation of Novel Approaches to Software Engineering