perspective (Linberg, 1999). Recently, empirical
studies (Berntsson-Svensson, Aurum, 2006) have
found that the nature of projects might not be
systematically correlated with success and failure
factors. (Boehm, 1991) includes in his top ten risk
items unrealistic schedules and budgets, but, it was
pointed out that having a schedule is not necessarily
correlated with project success (Verner, Cerpa,
2005): there are successful projects developed in a
shorter time than scheduled. Additionally, it has
been suggested that project success factors are
intertwined with product success factors (Wallin et
al., 2002).
This paper adopts a definition of project success and failure very close to the Standish Group's definition (Standish Group, 1994), which states that a project achieves its objectives when it is on time, on budget, and with all features and functions as originally specified. Concerning this point, let us recall that the cost-schedule-performance triad of project management is often termed the "Iron Triangle" (Williams, Pandelios, Behrens, 1999).
This triadic view will be used in the rest of the
paper.
Our understanding of risk uses a threshold concept (Linberg, 1999). In order to check whether or not a project is meeting its stated objectives, we consider thresholds on budget, schedule, and required performance. In this way, project success and failure vary according to how we set these thresholds. This interpretation allows us to judge a project successful on the basis of a specific aspect alone (e.g., a project can be considered successful with respect to budget but a failure with respect to the required defect rate). Below, we show that this concept can be represented by a function of many variables that impact the metric chosen to represent success.
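To make this concrete, the following is a minimal sketch (in Python) of the threshold-based view of success; the objective names, measured values, and threshold values are hypothetical, chosen only to show how the same measurements can yield different per-objective verdicts depending on where the thresholds are set:

    # Sketch: each project objective is compared against its own
    # threshold, so a project can succeed on one aspect and fail on
    # another. All values below are hypothetical.

    def evaluate_success(measured, thresholds, higher_is_better):
        """Return a per-objective success verdict."""
        verdicts = {}
        for objective, value in measured.items():
            threshold = thresholds[objective]
            if higher_is_better[objective]:
                verdicts[objective] = value >= threshold
            else:
                verdicts[objective] = value <= threshold
        return verdicts

    # Cost overrun and defect rate should stay below their thresholds,
    # SPI should stay above its threshold.
    measured = {"cost_overrun": 0.05, "spi": 0.95, "defect_rate": 0.02}
    thresholds = {"cost_overrun": 0.10, "spi": 1.0, "defect_rate": 0.01}
    higher_is_better = {"cost_overrun": False, "spi": True, "defect_rate": False}

    print(evaluate_success(measured, thresholds, higher_is_better))
    # {'cost_overrun': True, 'spi': False, 'defect_rate': False}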
3 PROBLEM DEFINITION
To define the problem that this paper deals with, let us consider again equations (1) and (1.a) and Figure 1. In particular, during the Analysis phase (Figure 1), we can calculate the risk exposure for each risk element identified in the previous phase (Identify). As already mentioned (Section 2.1), this kind of evaluation is based on direct queries to stakeholders, who are asked to provide a subjective score for each identified risk. This procedure might not be reliable, because stakeholders' subjective scores can change over time according to their feelings (Tversky, Kahneman, 1974). In other words, we need a control mechanism. A possible improvement is to use historical data, if available, to build a control mechanism that increases our confidence in those subjective scores.
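Equations (1) and (1.a) are not reproduced in this excerpt; assuming the classical form of risk exposure from Boehm's framework (probability of the unsatisfactory outcome times the loss it causes), a minimal sketch of the computation, with a hypothetical consistency check against historical data playing the role of the control mechanism, could look like:

    # Sketch: risk exposure (RE = probability x loss, Boehm's classical
    # form) plus a simple, hypothetical control mechanism that flags a
    # subjective score when it deviates too much from the historical
    # mean for the same risk item.

    def risk_exposure(probability, loss):
        return probability * loss

    def flag_suspect_scores(subjective, historical_means, tolerance=0.2):
        """Flag subjective probabilities far from their historical means."""
        suspects = []
        for risk, prob in subjective.items():
            mean = historical_means.get(risk)
            if mean is not None and abs(prob - mean) > tolerance:
                suspects.append(risk)
        return suspects

    # Hypothetical stakeholder scores (probabilities) and losses.
    subjective = {"schedule_slip": 0.7, "staff_turnover": 0.2}
    losses = {"schedule_slip": 100.0, "staff_turnover": 40.0}
    historical_means = {"schedule_slip": 0.4, "staff_turnover": 0.25}

    exposures = {r: risk_exposure(p, losses[r]) for r, p in subjective.items()}
    print(exposures)  # {'schedule_slip': 70.0, 'staff_turnover': 8.0}
    print(flag_suspect_scores(subjective, historical_means))  # ['schedule_slip']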
When evaluating the overall risk of missing a project objective (an element of the Iron Triangle), we take the risk exposures estimated for each objective and calculate the average risk exposure (Roy, 2004). Generally, this average is weighted according to information that RM managers can extract from the organization's historical data (Roy, 2004), (Fussel and Field, 2005). Based on these weighted averages, one should define strategies and plans to avoid or mitigate risk occurrence.
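As a sketch (the exposure values and weights below are hypothetical; in practice the weights would come from the organization's historical data), the weighted average over the risk exposures attached to one objective could be computed as:

    # Sketch: weighted average risk exposure for one Iron Triangle
    # objective, with weights chosen purely for illustration.

    def weighted_average_exposure(exposures, weights):
        """Weighted average of risk exposures; weights need not sum to 1."""
        total_weight = sum(weights)
        return sum(e * w for e, w in zip(exposures, weights)) / total_weight

    # Hypothetical exposures of three risks threatening the schedule
    # objective, weighted by their frequency in past projects.
    schedule_exposures = [70.0, 12.0, 30.0]
    weights = [0.5, 0.2, 0.3]
    print(weighted_average_exposure(schedule_exposures, weights))  # 46.4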
During the Control phase (Figure 1), we then face the problem of determining whether and to what extent risks have impacted the project objectives (e.g., schedule slippage evaluation). For instance, if risks have heavily impacted the project, we should obtain a low probability of success. Note that this evaluation depends on what we choose as a basis for comparison. For instance, let us assume that the historical data of the considered organization show that, in similar projects, the organization usually obtained values of the Schedule Performance Index (SPI) between 0.8 and 0.9 (SPI is an index that quantifies the objective of staying on schedule). Based on the definition of this index (the ratio between the work performed and the work planned), only values of at least 1.0 should be considered a success. Let us assume that the current project obtained SPI_Curr = 0.95. If we chose the theoretical value (SPI_theor = 1.0) as the threshold for comparison, we would record a failure (because SPI_Curr < SPI_theor). If, instead, we chose the mean value over past projects (e.g., SPI_mean = 0.85), we would record a success.
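Using the numbers from this example, a small sketch makes the dependence on the chosen baseline explicit:

    # The same measured SPI succeeds or fails depending on the baseline:
    # the theoretical threshold (1.0) or the historical mean (0.85).
    # Values come from the example in the text.

    spi_curr = 0.95
    spi_theor = 1.0   # theoretical threshold: work performed = work planned
    spi_mean = 0.85   # mean SPI over similar past projects

    print("vs theoretical:", "success" if spi_curr >= spi_theor else "failure")
    print("vs historical mean:", "success" if spi_curr >= spi_mean else "failure")
    # vs theoretical: failure
    # vs historical mean: success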
Therefore, if we calibrate the evaluation criterion on the observed data, the decisions we make are based on information that is as comprehensive and up to date as possible, because the basis for comparison reflects the real performance of the organization. Actually, the problem is more complex: this calibration should take into account all the factors (e.g., domain, complexity, developers' experience, etc.) that impact the considered performance measure (e.g., SPI). In other words, we should consider a regression function (e.g., with SPI as the dependent variable and the impacting factors as independent variables). Often, for the sake of simplicity, those factors are left out; hence, organizations prefer adopting just theoretical thresholds.
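As a sketch of this regression view (the factor names, the linear form, and all data below are hypothetical; the paper does not prescribe a specific model), SPI could be regressed on candidate impacting factors with ordinary least squares:

    # Sketch: SPI as the dependent variable, candidate impacting factors
    # (here: complexity, developers' experience) as independent variables.
    # Data and the linear form are hypothetical; a real calibration would
    # use the organization's historical project records.
    import numpy as np

    # One row per past project: [complexity, experience_years].
    factors = np.array([
        [3.0, 5.0],
        [7.0, 2.0],
        [5.0, 8.0],
        [9.0, 1.0],
        [4.0, 6.0],
    ])
    spi = np.array([0.95, 0.78, 0.92, 0.70, 0.90])

    # Add an intercept column and solve the least-squares problem.
    X = np.column_stack([np.ones(len(factors)), factors])
    coeffs, *_ = np.linalg.lstsq(X, spi, rcond=None)

    # Calibrated baseline SPI for a new project with given factor values.
    new_project = np.array([1.0, 6.0, 4.0])  # intercept, complexity, experience
    print("fitted SPI (past projects):", X @ coeffs)
    print("baseline for new project:", new_project @ coeffs)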