cise than in static analysis. Static analysis techniques
determine all possible dependencies based on static
data, e.g. source code. As a result, the set of statically
determined dependencies can be enormous while containing
few dependencies that are actually relevant for change
impact analysis. The major advantage of dynamic analysis,
however, is that implementing dynamic analysis techniques
is cheap due to the low software instrumentation effort.
A disadvantage is that a large amount of runtime data
needs to be stored and managed.
Moe et al. developed a method to improve the un-
derstanding and development of distributed systems
(Moe and Sandahl, 2002). The method is based on op-
erational data and comprises three steps: 1. collecting
remote procedure calls during operation, 2. extracting
trace data for statistics and reconstructing call graphs,
3. visualizing the data.
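The call graph reconstruction in step 2 can be sketched as follows. This is a minimal, hypothetical illustration, not the cited implementation; the trace record format (caller, callee) and all names are assumptions.

```python
# Hypothetical sketch of step 2: aggregating remote-procedure-call trace
# records, collected during operation, into a weighted call graph.
from collections import defaultdict

def build_call_graph(trace):
    """Aggregate (caller, callee) records into caller -> callee -> call count."""
    graph = defaultdict(lambda: defaultdict(int))
    for caller, callee in trace:
        graph[caller][callee] += 1  # edge weight = number of observed calls
    return graph

# Illustrative trace data (not from a real system).
trace = [("OrderService", "Billing"), ("OrderService", "Billing"),
         ("OrderService", "Inventory")]
graph = build_call_graph(trace)
assert graph["OrderService"]["Billing"] == 2
```

The edge weights already hint at statistics for step 2 and provide raw material for the visualization in step 3.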
Law presents a dynamic slicing technique named
“Program Path Profiling” to identify dependencies
between program units (Law, 2005). Here, the soft-
ware needs to be instrumented to record calls at the
procedure level.
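Procedure-level instrumentation of this kind can be sketched with a simple tracing decorator. This is a hypothetical illustration in the spirit of the approach, not Law's actual technique; all function names are invented.

```python
# Hypothetical sketch: a decorator instruments procedures so that every
# call is recorded, allowing dependencies between program units to be
# derived from the collected call log afterwards.
import functools

CALL_LOG = []

def traced(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        CALL_LOG.append(func.__name__)  # record the call at procedure level
        return func(*args, **kwargs)
    return wrapper

@traced
def parse(data):
    return validate(data)

@traced
def validate(data):
    return bool(data)

parse("change request")
assert CALL_LOG == ["parse", "validate"]  # parse depends on validate
```

The recorded sequence shows that `parse` calls `validate`, i.e. a runtime dependency between the two units.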
These approaches are what we interpret as
“cheap”, because dependencies are examined
by instrumenting the software system. Nevertheless,
these approaches concentrate only on the determi-
nation of dependencies without considering their
strength. Moe et al. and Goradia address this issue
(Moe and Sandahl, 2002; Goradia, 1993).
Change Coupling Analysis is a subdiscipline
in the field of MSR (Mining Software Repositories)
and aims at identifying logical couplings between
modules, classes and methods. In the research field
of software evolution, these logical change couplings
are used to identify shortcomings in the architecture
of the software system (Gall et al., 2003). In the
context of software change prediction and impact
analysis, logical change couplings can be used
to supplement physical dependencies (Kagdi and
Maletic, 2007).
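The core idea of change coupling analysis can be sketched as follows: files that are repeatedly modified in the same commit are considered logically coupled. The commit data below is purely illustrative, not mined from a real repository.

```python
# Hypothetical sketch of change coupling analysis: count how often each
# pair of files is changed together across the commit history.
from collections import Counter
from itertools import combinations

def change_couplings(commits):
    """commits: list of lists of changed file names per commit."""
    pairs = Counter()
    for changed_files in commits:
        # Sort so that each unordered file pair is counted consistently.
        for a, b in combinations(sorted(set(changed_files)), 2):
            pairs[(a, b)] += 1
    return pairs

# Illustrative commit history (invented).
commits = [["Order.java", "Billing.java"],
           ["Order.java", "Billing.java", "Ui.java"],
           ["Ui.java"]]
couplings = change_couplings(commits)
assert couplings[("Billing.java", "Order.java")] == 2
```

Pairs with high co-change counts are candidates for logical couplings that may supplement physically determined dependencies.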
The QCR-approach (Quantitative Analysis,
Change Sequence Analysis, Relation Analysis) has
already been applied in various case studies (Gall
et al., 2003; Gall and Lanza, 2006). The approach
was used to learn about the evolution of a software
system based on its (change) history.
Kagdi et al. combine single-version and evolu-
tionary dependencies for estimating software changes
(Kagdi and Maletic, 2007). This approach is particu-
larly interesting because we also want to combine de-
pendency analysis and MSR analysis. They hypothe-
size that combining dependencies from classical im-
pact analysis approaches (e.g. dependency analysis)
with dependencies mined from software repositories
will improve the support of software change prediction.
3 APPROACH
3.1 Approach Overview
The approach assumes that the combination of
dynamic dependency analysis and change coupling
analysis methods results in an overall improvement of
change impact analysis. Kagdi et al. investigated a
combined approach to support software change pre-
diction (Kagdi and Maletic, 2007). Change predic-
tion is one of the tasks that can be performed with the
results of impact analysis (others include estimating
timetables, estimating failure trends, etc.). However,
the proposed approach and the approach of Kagdi
et al. differ in one main point: Kagdi pursues the
paradigm that fine-grained analysis at the source code
level (i.e. analysing source at the syntactic level) is
necessary to support change prediction. We aim to
increase the basic set of dependencies (physical and
evolutionary) and the precision of these dependencies
in order to avoid analysing software artefacts at the
source code level.
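The combination of both dependency sources can be sketched as a weighted merge. The weighting scheme below is an assumption made for illustration only; it is not part of the cited approaches, and all artefact names are invented.

```python
# Hypothetical sketch: merge dependency strengths from dynamic analysis
# (physical) and change coupling analysis (evolutionary) into one map, so
# that dependencies supported by both sources rank higher.
def combine(dynamic_deps, coupling_deps, w_dyn=0.6, w_cpl=0.4):
    """Merge two {artefact_pair: strength} maps via a weighted sum."""
    combined = {}
    for pair in set(dynamic_deps) | set(coupling_deps):
        combined[pair] = (w_dyn * dynamic_deps.get(pair, 0.0)
                          + w_cpl * coupling_deps.get(pair, 0.0))
    return combined

# Illustrative strengths (invented).
dyn = {("Order", "Billing"): 0.8}
cpl = {("Order", "Billing"): 0.5, ("Order", "Ui"): 0.3}
result = combine(dyn, cpl)
assert abs(result[("Order", "Billing")] - 0.68) < 1e-9
```

A pair observed by both analyses, such as ("Order", "Billing") here, receives a higher combined strength than a pair found by only one of them.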
Figure 1 on page 3 describes the principal ap-
proach at a coarse level. As a starting point, vari-
ous data sources have to be examined using dynamic
dependency analysis and change coupling analysis.
Based on a data warehouse, the framework provides
information on two abstraction levels to support both
groups of users of change impact analysis: develop-
ers and managers.
Based on textually formulated change requests,
developers have to identify the primarily affected
artefacts (the initial set). These artefacts are com-
ponents and classes. The framework then proposes
artefacts which have dependencies to artefacts of the
initial set. These artefacts form the impact set. To
consider the importance and relevance (impact) of de-
pendencies, the framework calculates their strength
based on metrics. These metrics are described in sub-
section 3.2. Developers then confirm or reject arte-
facts of the proposed impact set. This process of pro-
posing and confirming or rejecting takes place in sev-
eral iterations.
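The iterative propose-and-confirm process can be sketched as follows. This is a minimal illustration under stated assumptions: the dependency map, the strength threshold, and the auto-confirming developer decision are all invented for the example and are not part of the framework's specification.

```python
# Hypothetical sketch of the iterative impact-set computation: starting
# from the initial set, the framework proposes dependent artefacts whose
# dependency strength exceeds a threshold; the developer confirms or
# rejects them, and confirmed artefacts seed the next iteration.
def compute_impact_set(initial_set, dependencies, confirm, threshold=0.5):
    """dependencies: {artefact: {dependent_artefact: strength}}."""
    impact, frontier = set(initial_set), set(initial_set)
    while frontier:
        proposed = {dep
                    for art in frontier
                    for dep, strength in dependencies.get(art, {}).items()
                    if strength >= threshold and dep not in impact}
        frontier = {dep for dep in proposed if confirm(dep)}  # developer decision
        impact |= frontier
    return impact

# Illustrative dependency strengths (invented); confirm accepts everything.
deps = {"Order": {"Billing": 0.9, "Ui": 0.2}, "Billing": {"Ledger": 0.7}}
result = compute_impact_set({"Order"}, deps, confirm=lambda artefact: True)
assert result == {"Order", "Billing", "Ledger"}
```

Note that "Ui" is filtered out by the strength threshold, while "Ledger" enters the impact set only in the second iteration, via the confirmed artefact "Billing".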
Project managers have to be supported in creating
timetables, allocating human resources and estimat-
ing the risk exposure. Based on the risk exposure, the
framework can support quality managers in allocat-
ing quality assurance activities, e.g. on which com-
ponents to concentrate testing resources. The frame-
work has to determine the quality level of the next
ICEIS 2008 - International Conference on Enterprise Information Systems
454