Factors that Complicate the Selection of Software Requirements
Validating Factors from Literature in an Empirical Study
Hans Schoenmakers (1), Rob Kusters (2,3) and Jos Trienekens (3)
(1) Software Development Centre, Ricoh Europe, Magistratenlaan 2, 5223 MD ’s-Hertogenbosch, The Netherlands
(2) Management, Science and Technology, Open Universiteit Nederland, Heerlen, The Netherlands
(3) Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
Keywords: Requirements Selection, Prioritization, Software Release Planning, Stakeholders, Stakeholder Salience, Systematic Literature Review.
Abstract: In market-driven software product development, new features may be added to the software based on a collection of candidate requirements. Selecting requirements, however, is difficult. Despite all the work done on this problem, known as the next release problem, a comprehensive overview of the factors that complicate selecting software requirements is still missing. This paper aims to provide such an overview. The authors performed a systematic literature review, searching for occurrences in the literature where a causal relation was suggested between certain conditions and the difficulty of selecting software requirements. Analyzing 544 papers led to 156 findings. Clustering them resulted in 33 complicating factors that were classified into eight groups. The complicating factors were validated in semi-structured interviews with twelve experts from three different industrial organizations. These interviews consisted of questions about the participants' experiences with the complicating factors and about how these factors complicated selecting requirements. The results help in gaining a better understanding of the complexity of selecting requirements.
1 INTRODUCTION
In market-driven software product development, new features are frequently added to the software based on a collection of yet unfulfilled requirements (Fogelström et al., 2009; Regnell and Brinkkemper, 2005). Selecting requirements for the next or, more generally, a future software release is a necessary but difficult task (Wnuk et al., 2015; Bagnall et al., 2001; Li et al., 2017). Given a collection of requirements, the organization faces the challenge of selecting the ones with the highest priority and skipping or postponing the rest (Ruhe et al., 2002); this is a major theme in the area of release planning (Ruhe, 2005; Greer and Ruhe, 2004). In a context with many requirements, a multitude of stakeholders of different salience (Mitchell et al., 1997), multiple decision makers, and changing circumstances, selecting requirements is difficult; this difficulty is known as the next release problem (Bagnall et al., 2001). Considering that developing the right product is essential and that developing software products consumes scarce resources (Kabbedijk et al., 2010; Berntsson Svensson, 2011), it is evident that selecting the right requirements is important. This is even more the case because the effects of selection decisions are felt (much) later and wasted effort cannot be undone. The lack of understanding of which requirements to select justifies the question: what makes selecting requirements difficult? (Wohlin and Aurum, 2005; Barney et al., 2009). This problem has been addressed by different authors from different perspectives. A comprehensive overview of the factors that complicate selecting software requirements in a release planning context, however, is missing. This paper documents research that, starting with a systematic literature review (SLR), aims to provide such an overview. It contributes to a better understanding and may be used in initiatives to improve the practice of selecting requirements.
Selecting requirements is necessary for several reasons. First, the collection of candidate requirements is usually much larger than what can be accomplished with the available resources (Li et al., 2007; Berander and Andrews, 2005; Sivzattian and Nuseibeh, 2001). Second, the requirements need not all be developed in a single release.
There has been a shift from developing infrequent releases covering many requirements to developing frequent releases with few requirements (Fowler and Highsmith, 2001; Greer and Ruhe, 2004). Agile methods, which support incremental and iterative development, appear to be replacing the Waterfall method (Royce, 1970; Racheva et al., 2010; Turk et al., 2014). Third, some requirements should not be developed at all, for example requirements that have a negative return on investment or requirements that do not comply with the long-term product strategy (Regnell et al., 1998).
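The underlying selection task, the next release problem, is often formalized as choosing a subset of candidate requirements that maximizes estimated value without exceeding a resource budget (Bagnall et al., 2001). The sketch below is a minimal, hypothetical illustration of that formulation using a greedy value-per-cost heuristic; the requirement names, values, costs and the function name are invented for the example and are not taken from the paper or from the cited work.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    value: float  # estimated stakeholder value
    cost: float   # estimated implementation effort

def select_for_next_release(candidates, budget):
    """Greedy value-per-cost heuristic for the next release problem.

    Illustration only: the exact problem is a 0/1 knapsack variant and is
    NP-hard, so release planning research relies on exact or metaheuristic
    solvers rather than a simple greedy pass.
    """
    selected, remaining = [], budget
    for req in sorted(candidates, key=lambda r: r.value / r.cost, reverse=True):
        if req.cost <= remaining:
            selected.append(req)
            remaining -= req.cost
    return selected

# Hypothetical example data (not taken from the paper).
candidates = [
    Requirement("single sign-on", value=8.0, cost=5.0),
    Requirement("export to PDF", value=3.0, cost=1.0),
    Requirement("audit logging", value=5.0, cost=4.0),
]
print([r.name for r in select_for_next_release(candidates, budget=6.0)])
```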
The difficulty of selecting the requirements with the highest priority is the topic of this paper. In Section 4.1, a research question is formulated: RQ1: "Which factors complicate selecting requirements from a collection of candidate requirements?". In Section 4.3, a second research question is formulated to guide the validation of the found factors: RQ2: "Are the found and classified factors experienced in practice, and if so, how do they complicate selecting software requirements?". The remainder of this paper documents the SLR, the classification of complicating factors, and the survey that validates these factors.
2 RELATED WORK
Agile methods, which allow for frequent software releases covering relatively few requirements, have to some extent replaced the Waterfall method (Royce, 1970), which relied on infrequent releases of a large number of requirements. One consequence of this trend of developing software products in a series of releases is the need to select the requirements with the highest priority from the collection of candidate requirements. So far, no clear-cut method is available to determine which requirements those are. This section discusses research on this issue, thus identifying the gap between the factors that complicate selecting software requirements and what is known about those factors. For each research area, the main contributions to the aspect of selecting requirements are briefly listed, together with their limitations. The gap is determined by analyzing these limitations.
STAKEHOLDER THEORY. The role that stakeholders play in the selection of requirements follows from the role that stakeholders play for the organization. This role has been the subject of Stakeholder Theory (Freeman and McVea, 2001). The notion that the relevance of a stakeholder depends on a number of attributes (power, urgency, legitimacy) has made clear that not every stakeholder has the same relevance ('salience') to the organization (Mitchell et al., 1997). For the selection of software requirements, it follows that identifying the stakeholders and assessing their importance matters. Stakeholder Theory and the concept of stakeholder salience, however, do not address the difficulty of determining who the stakeholders are, nor what their salience is.
PROSPECT THEORY. With Prospect Theory, Kahneman addressed the limitations of Expected Utility Theory (Mongin, 1997), a theory that until then was generally accepted as a normative model of rational choice (Kahneman and Tversky, 1979). Where Expected Utility Theory assumes a rational decision maker, Prospect Theory recognizes that there are circumstances under which decision makers make seemingly irrational choices. The theory explains, for example, why organizations prefer requirements that deliver profit over requirements that avoid loss. Prospect Theory has contributed much to the understanding of some, but not all, of the problems related to selecting requirements. It does not address issues like the quality of requirements, requirements dependencies, or the dynamics of market-driven software development, to name a few.
THEORY W: MAKE EVERYONE A WINNER. By introducing Theory W, "make everyone a winner", Boehm takes the standpoint that, by creating win-win situations, "every stakeholder should win" (Boehm and Ross, 1989). Based on this standpoint, Boehm constructed a method, 'WinWin', that takes all stakeholders into consideration. The method has been extended with quantitative methods to allow for better and more objective decisions (Ruhe et al., 2002).
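To illustrate what a quantitative, stakeholder-aware scoring step can look like, the sketch below aggregates per-stakeholder ratings into a single weighted score per requirement. It is a minimal, assumption-laden example and not the Quantitative WinWin method of Ruhe et al. (2002) itself; the stakeholder names, weights and ratings are invented.

```python
# Hypothetical stakeholder weights (relative importance to the organization).
stakeholder_weight = {"sales": 0.5, "support": 0.3, "architect": 0.2}

# Hypothetical per-stakeholder ratings of each candidate requirement (1..9).
ratings = {
    "single sign-on": {"sales": 9, "support": 4, "architect": 7},
    "export to PDF":  {"sales": 6, "support": 8, "architect": 3},
}

def weighted_score(requirement: str) -> float:
    """Aggregate the stakeholder ratings of one requirement into a single score."""
    return sum(stakeholder_weight[s] * r for s, r in ratings[requirement].items())

# Rank requirements by the aggregated score, highest first.
for req in sorted(ratings, key=weighted_score, reverse=True):
    print(f"{req}: {weighted_score(req):.1f}")
```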
SELECTION CRITERIA. Aurum and Wohlin investigated whether criteria could be defined that would help in prioritizing requirements (Wohlin and Aurum, 2005). They found that organizations have preferences for certain types of requirements. In general, they prefer business and management requirements over system requirements; a seemingly irrational preference. The authors recognize the importance of system requirements but argue that decisions about such requirements should be handled "within the development and evolution of the software".
RELEASE PLANNING. Unlike the work discussed above, release planning specifically addresses the problem of selecting requirements for the next software release. Release planning has been described as "to decide upon the most promising software release plans while taking into account diverse qualitative and quantitative project data" (Ruhe, 2005). Release planning assumes that software is developed in a series of releases with additive functionality, and aims at selecting the right software requirements for the next release. Release planning has been characterized as a 'wicked problem' (Carlshamre, 2002), referring to a class of essentially unsolvable problems (Rittel and Webber, 1973).
Ruhe identifies ten difficulties related to release planning (Ruhe, 2005): (1) features are not well specified and understood, (2) insufficient stakeholder involvement, (3) change of features and requirements, (4) size and complexity, (5) uncertainty of data, (6) lacking availability of data, (7) constraints —mostly resources, schedule, budget and effort, (8) unclear objectives, (9) efficiency and effectiveness, (10) lacking tooling support.
Khurum et al. identify six factors that, in a context of requirements triage and selection, complicate selecting requirements (Khurum et al., 2012): (1) difficulty in aligning requirements with long-term business goals, (2) requirements dependencies, (3) difficulty in improving requirements triage decision quality, (4) difficulty in comparing functional and non-functional requirements, (5) creation of product value, (6) difficulty in selecting prioritization techniques.
There is, however, only a relatively small overlap between the overviews of (Ruhe, 2005) and (Khurum et al., 2012), suggesting the existence of more factors. The research directions discussed above have all contributed to the understanding of which aspects are relevant for selecting software requirements. Since —with the exception of release planning— they each focus on only one aspect of the decision-making process, it is not surprising that they do not attempt to cover all aspects that complicate selecting software requirements.
To the best of the authors' knowledge, no comprehensive overview exists of the factors that complicate selecting software requirements. The systematic literature review and the subsequent classification and validation resulted in such an overview, attempting to fill this gap.
3 METHODOLOGY
In order to obtain a reasonably complete and valid set of factors, a three-step strategy was chosen: (1) perform a systematic literature review to get a good picture of what the literature covers on the topic of selecting software requirements; this step would result in findings, that is, occurrences in the literature where complications of selecting requirements were addressed; (2) classify the findings —many, unsorted, with overlaps and duplicates— in order to obtain disjunct groups of complicating factors; (3) perform a survey to determine whether the found factors can be observed in practice.
The execution of the methodology and the results are addressed in the Results section.
4 RESULTS
4.1 The Systematic Literature Review
The SLR was performed in conformance with Kitchenham's guidelines (Kitchenham and Charters, 2007) and comprised the following tasks: (1) identifying the need for a review, (2) developing a review protocol, (3) searching and analyzing promising papers, (4) defining and monitoring the stop condition.
IDENTIFYING THE NEED FOR REVIEW. Since an SLR may not be needed nor justified if one has been done before (Kitchenham and Charters, 2007), the literature was searched to find out whether such a review had already been performed. Despite the vast number of publications related to the next release problem (e.g. (Wohlin and Aurum, 2005; Ruhe, 2005; Bagnall et al., 2001)), none of them qualified as an SLR —nor pretended to be one.
DEVELOPING A REVIEW PROTOCOL. A review protocol was compiled to guide the SLR and to assure its validity and reliability, covering the following elements: (1) formulating a research question, (2) defining a search strategy, compiling a list of search terms and deciding on resources, (3) study selection criteria and procedures, (4) study quality assessment, (5) data extraction strategy.
Research question: the goal of the SLR was formalized with the following research question:
RQ1: "Which factors complicate selecting requirements from a collection of candidate requirements?"
Interpretation of the terms used: (1) the word 'collection' is used to emphasize that no particular structure is assumed, nor an ordering or the absence of duplicates; the only assumption made is that requirements are collected somehow. (2) With 'selecting from a collection', a process is assumed in which all members of the collection are considered and are either selected or not selected —for a coming software release. No assumption is made that this selection is done in one step; it could be done in multiple steps, for example via early requirements triage (Khurum et al., 2012). (3) IEEE defines a requirement as "(3.1) A condition or capability needed by a user to solve a problem or achieve an objective. (3.2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents. (3.3) A documented representation of a condition or capability as in (3.1) or (3.2)". In the applicable release planning context, the requirements may be somewhere in the transition between being an abstract concept (interpretation 3.2 above) and being a documented representation (interpretation 3.3).
Search strategy, search terms, resources: literature was searched in multiple ways: (1) by searching with Google Scholar, (2) by following references encountered in found literature, (3) by searching for particular keywords encountered in found literature.
Searching literature implies constructing search queries. It was argued that, due to the wide scope of the problem, searching with terms derived from 'selecting software requirements' would lead to an abundance of irrelevant results. To overcome this difficulty, a conceptual model (CM) was constructed, using terms from the problem context, augmented by terms derived from related work (see Section 2), in particular from (Mitchell et al., 1997; Freeman and Reed, 1983; van de Weerd et al., 2006). The CM was implemented as an information model, in the style of the NIAM natural language information method (Halpin, 1998). Search queries were created from keywords that followed from the resulting interconnected set of concepts.
References found in literature items that addressed particular complications extended the set of literature to be reviewed. Likewise, certain terms found in the literature (like WinWin) led to new search terms, leading to relevant literature on the topic.
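As a minimal sketch of how search queries can be derived from such an interconnected set of concepts, the snippet below pairs concept terms from two groups into Boolean query strings. The concept groups and terms shown are invented for illustration and are not the actual keywords used in this review.

```python
from itertools import product

# Hypothetical concept groups taken from a conceptual model (illustrative only).
concepts = {
    "activity": ["requirements selection", "requirements prioritization", "release planning"],
    "actor": ["stakeholder", "product manager", "decision maker"],
}

def build_queries(model: dict) -> list[str]:
    """Pair every activity term with every actor term into a quoted AND-query."""
    return [f'"{a}" AND "{b}"' for a, b in product(model["activity"], model["actor"])]

for query in build_queries(concepts):
    print(query)
```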
Study selection criteria, procedures: in order to judge whether found literature should be included in the review, the following study selection criteria were used: (1) the work discusses the theory or practice of selecting software requirements, avoiding papers with a very wide context such as requirements engineering in general, (2) articles, conference papers and theses are included, but no books (risk of being outdated), (3) no publications from tool vendors, as they are expected to be less objective. Publication date was not used as a selection criterion, arguing that dated publications would be recognized —and rejected— at analysis time.
Study quality assessment: found papers were included in the set to be analyzed 'on face value'. When in doubt, it was decided to "err on the side of inclusiveness" (Okoli and Schabram, 2010), trusting that reading and analyzing the papers would lead to rejecting material of lesser quality.
Data extraction strategy: the review was split into two parts: one part in which literature was searched using the constructed queries, and another part in which the found literature —stored in a document repository— was analyzed. Care was taken that the repository was free of duplicates.
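A minimal sketch of such a duplicate check is shown below, assuming each stored paper is keyed by a normalized title and a year; the normalization rule, function names and example entries are illustrative assumptions, not the procedure used in the review.

```python
import re

def normalize(title: str) -> str:
    """Lowercase the title and strip punctuation and extra whitespace,
    so that minor formatting differences do not hide a duplicate."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def add_to_repository(repository: dict, title: str, year: int) -> bool:
    """Add a paper only if no paper with the same normalized title and year exists."""
    key = (normalize(title), year)
    if key in repository:
        return False
    repository[key] = {"title": title, "year": year}
    return True

repo: dict = {}
print(add_to_repository(repo, "The Next Release Problem", 2001))   # True
print(add_to_repository(repo, "The next release problem.", 2001))  # False (duplicate)
```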
SEARCHING AND ANALYZING PROMISING PAPERS. It was decided to search literature with Google Scholar (GS), a search engine intended for searching scholarly literature. Google Scholar was validated as a literature search tool by addressing three anticipated threats to validity: (1) GS does not find enough relevant material, (2) GS finds low-quality literature, (3) GS finds so much low-quality literature that the returned results obscure the high-quality material.
Research shows that GS finds enough literature (DeGraff et al., 2013; Shariff et al., 2013; Beel and Gipp, 2009; Gehanno et al., 2013; Falagas et al., 2008). Some authors state that GS does not discriminate much between older and newer literature (Beel and Gipp, 2009) or that GS is somewhat biased towards literature in the English language (Neuhaus et al., 2006). These concerns are less relevant for this literature review: (1) obsolete material will be recognized 'on face value' and not be added to the literature repository, and (2) only English literature will be searched. The threat that GS also returns low-quality literature is real (Gray et al., 2012; Noruzi, 2005). It was mitigated by verifying the quality of found literature before adding it to the literature repository. The third threat is real too: if much of the returned literature is of low quality, the high-quality material will be obscured by it. Investigations of GS, however, do not suggest that this is the case.
DEFINING AND MONITORING THE STOP CONDITION. Even if it were possible to review all existing literature, there is no need to do so. Continued searches will provide more papers on the topic and return more findings, but at some point new findings no longer lead to new insights (Levy and Ellis, 2006), and eventually it takes too long to find anything new. This means that a 'stop criterion' has to be defined so that the search ends neither too early nor too late. It was decided to stop when, after N consecutive searches, no material was found that brought new insights, N being a number between five and ten.
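A minimal sketch of this stop criterion is shown below, assuming the reviewer records after each search how many new insights it yielded; the threshold value and function name are illustrative.

```python
def reached_saturation(new_insights_per_search: list[int], n: int = 5) -> bool:
    """Stop once the last n consecutive searches produced no new insights."""
    if len(new_insights_per_search) < n:
        return False
    return all(count == 0 for count in new_insights_per_search[-n:])

# Example: the last five searches yielded nothing new, so searching can stop.
history = [4, 2, 3, 1, 0, 0, 0, 0, 0]
print(reached_saturation(history, n=5))  # True
```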
The SLR was performed by one person. Searching resulted in 544 papers. These papers were analyzed, resulting in 156 findings, that is, passages in the analyzed literature indicating how some condition complicates selecting software requirements. A finding consisted of (1) the found fragment of text, (2) a descriptive label summarizing, as a one-liner, the content of the finding, and (3) an identification of the literature item. The label served as an aid to classify the findings.
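The structure of such a finding could be captured in a small record like the sketch below; the field names and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One passage from the reviewed literature, as described in Section 4.1."""
    fragment: str       # the found fragment of text
    label: str          # one-line summary, used later as an aid for clustering
    literature_id: str  # identification of the literature item

example = Finding(
    fragment="(hypothetical quote about interdependent requirements)",
    label="Requirements dependencies complicate selection",
    literature_id="Khurum2012",
)
print(example.label)
```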
4.2 Classifying the Findings
Reviewing the literature led to statements pointing in the direction of factors that complicate selecting software requirements. Statements were unique, duplicated, or overlapped others to some extent. In order to arrive at meaningful complicating factors, they had to be processed. It was decided to cluster them, using 'similarity in complicating selecting software requirements' as the clustering criterion. Since the findings were statements in natural language, no automated algorithm could be used; instead, clustering relied on human judgment. It was decided to cluster by card sorting with Metaplan, a technique suitable for this type of data that allows for group-wise, consensus-based decision making (Capra, 2005; Dulle and Rauch, 2014). More specifically, open card sorting was used, since initially there would be no clusters: the clusters, representing complicating factors, would emerge while sorting.
The number of found complicating factors was such that grouping them was needed to make the results comprehensible. It was decided to use the same technique as for the clustering activity, again with 'similarity' as the classification criterion, arguing that this criterion provided the most insight. Clustering the findings led to 33 complicating factors. Classification through a Metaplan session (Dulle and Rauch, 2014), performed with three participants experienced in requirements engineering and management and/or methodology, led to a grouping of the factors. Table 1 lists the complicating factors, together with the class labels and the survey results discussed in Section 4.3.
4.3 Validating the Complicating Factors
To determine whether the found factors are indeed experienced in practice, a survey was performed, thus validating the results of the SLR. The activity was formalized with the following research question:
RQ2: "Are the found and classified factors observed in practice, and if so, how do they complicate selecting software requirements?"
Three software product developing organizations took part in this activity, each organization being represented by a few participants with different roles who were somehow involved in the selection of software requirements. The organizations were chosen taking into account their expected level of professionalism in dealing with requirements. Information was gathered through semi-structured interviews, thus combining the advantages of a structured approach with the freedom to probe for information. It was made clear to the participants that the questions were about the process of selecting requirements, not about requirements engineering in general. At the start of an interview, unbiased information was gathered with an open, explorative question: "Which problems do you experience with selecting requirements?". The purpose of this question was twofold: (1) finding out whether there were factors so pertinent that the participant could immediately name them, and (2) providing an additional check of the completeness of the set of complicating factors. The interview continued by addressing each complicating factor, asking how often the complication was experienced (values: 'never', 'almost never', 'sometimes', 'often', 'practically always') and how severe it was (values: 'not at all a problem', 'somewhat problematic', 'serious problem'). It was emphasized to the participants that their answers should be based on their own experiences, not on opinions. Additional questions were asked to gain a deeper understanding of the problem: "How does this complicate selecting requirements?", whether solutions or work-arounds were known, or whether the participant had personally experienced the factor. It was also asked whether the organization had solutions or work-arounds to mitigate the complication. The order of the factors was randomized to avoid bias caused by factors that depend on this order, like tiredness of the participant or interviewer.
Twelve participants were interviewed in semi-structured interviews, each taking around two hours. The results of each interview were sent to the participant for correction, sometimes accompanied by questions for clarification. Interpretation of the data: for each factor, the frequency and severity values of all participants were plotted in a matrix. The rows represent the severity of the complicating factor, ranging from 'Not at all a problem' to 'Serious problem'. When a participant indicated that a factor had never been experienced, the participant was not asked how serious he considered the factor, arguing that asking this would be asking for opinions. The columns represent the frequency of occurrence of the factor, ranging from 'Never' to 'Practically always'. The cells of the matrix hold the participants, together with their organization and role. See Figure 1 for an example of such a matrix. A 'gray' area was defined of (frequency x severity) values that were considered complicating. The first criterion to decide whether a factor was complicating was having enough scores in the gray area, 'enough' being chosen as six. The second criterion was enough support from the statements in the comments made by the participants. The natural language aspect prohibited choosing a measurable criterion here. It was observed, however, that the level of a score did not always match the textual comments. Some participants, for example, gave low scores to severity, but in their comments they named severe complications and added that "such complications are just part of the job". The contrary also happened: high scores, but hardly any comment supporting them. Therefore the decision about the level of complicatedness ('Definitely complicating', 'Likely to be complicating', 'Unlikely to be very complicating') was made by independent review of the scores and the comments by the authors, followed by a discussion to reach consensus.
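A minimal sketch of the first, quantitative criterion is given below, assuming each participant's answer for a factor is recorded as a (frequency, severity) pair. The exact gray-area boundary used here is an assumption for illustration, since the paper defines the area graphically in Figure 1 rather than as an explicit rule.

```python
FREQUENCY = ["never", "almost never", "sometimes", "often", "practically always"]
SEVERITY = ["not at all a problem", "somewhat problematic", "serious problem"]

def in_gray_area(frequency: str, severity: str) -> bool:
    """Assumed gray area: the factor occurs at least sometimes and is at least
    somewhat problematic. The real boundary is defined graphically in Figure 1."""
    return FREQUENCY.index(frequency) >= FREQUENCY.index("sometimes") and \
           SEVERITY.index(severity) >= SEVERITY.index("somewhat problematic")

def enough_gray_scores(answers: list[tuple[str, str]], threshold: int = 6) -> bool:
    """First criterion from the paper: at least six scores fall in the gray area."""
    return sum(in_gray_area(f, s) for f, s in answers) >= threshold

# Hypothetical answers of twelve participants for one factor.
answers = [("often", "serious problem")] * 5 + [("sometimes", "somewhat problematic")] * 2 \
          + [("almost never", "not at all a problem")] * 5
print(enough_gray_scores(answers))  # True (7 scores in the gray area)
```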
Most —but not all— factors were recognized and experienced by the participants. Table 1 indicates, for each factor, whether the factor could be validated: it holds the level of complicatedness (column R) and the number of scores in the gray area (column S). The initial, open question "Which problems do you experience with selecting requirements?" did not lead to new complicating factors. This provides support for the claim that the set of complicating factors is reasonably complete.
Some organizations had found work-arounds or solutions for some of the complicating factors. For example, 'A large number of requirements to select from' was not experienced by one organization because it used a way of grouping requirements, thus avoiding a large list of small, detailed requirements.
Figure 1: Example of a matrix with participants' scores, shown here for factor 10, 'Having to plan further than just the current release'. The columns hold the frequency values ('Never', 'Almost never', 'Sometimes', 'Often', 'Practically always') and the rows hold the severity values ('Not at all a problem', 'Somewhat problematic', 'Serious problem'). Legend: the first (capital) letter identifies the participant; the subscript identifies the case (1: Case 1, 2: Case 2, 3: Case 3, P: Pilot case, E: Expert interview) and the role (t: team member, m: manager, e: external stakeholder representative).
5 CONCLUSIONS AND DISCUSSION
The principal results consist of the found factors that complicate selecting software requirements. The classification activity resulted in a logical structure, thus aiding the comprehensibility of the results.
Table 1: Classified complicating factors and survey results. R: level of complicatedness ('Definitely' = definitely complicating, 'Likely' = likely to be complicating, 'Unlikely' = unlikely to be very complicating); S: the number of scores in the gray area (see Figure 1). The class labels shown in the table are: Requirements Engineering, Requirements Selection Process, Stakeholder, Poor Understanding, Stakeholder Balancing, Stakeholder Complexity, Architecture and Changing Environment.

R           S   Complicating factor
Definitely  8   A large number of requirements to select from
Definitely  4   Requirements holding a large amount of information
Definitely  11  Requirements that are not explicit or not precise
Definitely  7   Requirements with different levels of abstraction
Definitely  5   Lack of structure in the requirements engineering process
Definitely  10  Lack of understanding how individual requirements contribute to stakeholders' needs
Definitely  12  Lack of understanding what the stakeholders want
Definitely  6   Difficulty in identifying who the stakeholders are, over different releases
Definitely  8   Lack of communication between decision makers and stakeholders
Definitely  9   Volatility of requirements
Definitely  5   Difficulty in finding a balance between precision/completeness of requirements and the effort to create them
Definitely  9   Difficulty of getting the right information required for prioritization
Likely      5   Unavailability of suitable tooling to support the selection task
Definitely  5   Lack of understanding the goals of the organization
Likely      8   Lack of trust that decision makers may have in the results of prioritization
Definitely  10  Lacking availability of resources required for implementing particular requirements
Definitely  12  Difficulty of estimating the effort needed to meet a requirement
Definitely  6   Time stress in the process of selecting requirements
Definitely  8   A time-consuming requirements selection process
Definitely  10  Difficulty of balancing conflicting stakes of stakeholders and resolving resulting conflicts
Definitely  7   A large number of stakeholders involved in the requirements selection process
Definitely  12  A different degree of importance that different stakeholders have for the organization
Definitely  9   Difficulty in accessing or involving the relevant stakeholders
Unlikely    7   Stakeholders in a subcontractor role, insufficiently understanding the organization's stakes
Likely      2   Unpredictable behavior of stakeholders within stakeholder groups
Likely      9   Lack of homogeneity within stakeholder groups
Definitely  8   Poor personal relationships between decision makers and stakeholders
Definitely  6   Requirements that cannot be selected unless other requirements are also fulfilled (or not fulfilled)
Definitely  7   Dependencies with other software product family members
Definitely  8   Dependencies with products from competitors
Definitely  3   Having to plan further than just the current release
Definitely  10  Different requirements for different market segments, spreading around the world
Definitely  7   Evolving goals, goal priorities, plans and mission of the organization
The follow-up survey confirmed most of the factors: 'Definitely complicating': 28, 'Likely to be complicating': 4, 'Unlikely to be very complicating': 1. One cannot exclude that the factors in the latter two categories could be complicating in other organizations. The interviews did not hold any indication of overlooked complicating factors. It is therefore concluded that the research resulted in a valid and substantially complete set of complicating factors. Missed factors, if any, are not expected to be the most serious ones. The results of the literature review provide insight and may be used in initiatives to find solutions to some of the identified problems.
Looking back at the work of others (see Section 2), one can see that the results confirm all difficulties identified by (Ruhe, 2005) and (Khurum et al., 2012). The number of found complicating factors provides support for the claim that release planning is a 'wicked problem' (Rittel and Webber, 1973).
The conceptual model, discussed in Section 4.1, helped in creating effective search queries. Additionally, it turned out to be a useful tool to determine whether found literature fitted within the scope. If a literature item as a whole fitted within the boundaries of the model, the paper was used and analyzed; if it did not, the paper was not considered any further.
The survey validated most factors. The occurrence of the factors appeared to depend on the particular organization: some factors that were experienced as problematic in one organization were not experienced as such in another. It remains to be investigated whether these differences are situational or are caused by a different level of process maturity. They are, however, an indication that even the non-validated factors may be complicating for certain organizations.
WEAKNESSES OF THE SLR. Reviewing and interpreting the literature was done by the first author only. Although the results were reviewed and discussed by the other authors, the quality of interpretation would have benefitted from a review by multiple reviewers.
WEAKNESSES OF THE CLASSIFICATION. The choice of the groups is arbitrary to some extent; others would have selected different labels. It is argued that this is unavoidable, since the material to be classified is in natural language. The well-defined classification criterion mitigated this weakness somewhat.
WEAKNESSES OF THE SURVEY. (1) All participants were working in organizations in the Netherlands. The results cannot be immediately extrapolated to the practice of selecting requirements in organizations in other parts of the world. (2) Although the results of the interviews were reviewed and corrected by the participants, the interviews themselves were not recorded. Recording them would have benefitted the accuracy of the results.
Despite the observed weaknesses, the research has resulted in a clearer picture of the factors that complicate selecting requirements, and the literature review has contributed to a better understanding of the problem.
REFERENCES
Bagnall, A. J., Rayward-Smith, V. J., and Whittley, I. M.
(2001). The next release problem. Information and
software technology, 43(14):883–890.
Barney, S., Wohlin, C., and Aurum, A. (2009). Balan-
cing software product investments. In Empirical Soft-
ware Engineering and Measurement, 2009. ESEM
2009. 3rd International Symposium on, pages 257–
268. IEEE.
Beel, J. and Gipp, B. (2009). Google scholar’s ranking
algorithm: The impact of articles’ age (an empirical
study). In Information Technology: New Generati-
ons, 2009. ITNG’09. Sixth International Conference
on, pages 160–164. IEEE.
Berander, P. and Andrews, A. (2005). Requirements prio-
ritization. In Engineering and managing software re-
quirements, chapter Engineering and managing soft-
ware requirements, pages 69–94. Springer.
Berntsson Svensson, R. (2011). Supporting Release Plan-
ning of Quality Requirements: The Quality Perfor-
mance Model. Lund University.
Boehm, B. W. and Ross, R. (1989). Theory-W software project management principles and examples. Software Engineering, IEEE Transactions on, 15(7):902–916.
Capra, M. G. (2005). Factor analysis of card sort data: an
alternative to hierarchical cluster analysis. In Procee-
dings of the Human Factors and Ergonomics Society
Annual Meeting, volume 49, pages 691–695. SAGE
Publications.
Carlshamre, P. (2002). Release planning in market-driven
software product development: Provoking an under-
standing. Requirements engineering, 7(3):139–151.
DeGraff, J. V., DeGraff, N., and Romesburg, H. C. (2013).
Literature searches with google scholar: Knowing
what you are and are not getting. GSA Today, 23(10).
Dulle, M. and Rauch, F. (2014). Toolbox for school-
community collaboration for sustainable develop-
ment. page 66.
Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., and Pappas,
G. (2008). Comparison of pubmed, scopus, web of
science, and google scholar: strengths and weaknes-
ses. The FASEB journal, 22(2):338–342.
Fogelström, N. D., Barney, S., Aurum, A., and Hederstierna, A. (2009). When product managers gamble with requirements: Attitudes to value and risk. In Requirements engineering: Foundation for software quality, pages 1–15. Springer.
Fowler, M. and Highsmith, J. (2001). The agile manifesto.
Software Development, 9(8):28–35.
Freeman, R. E. and McVea, J. (2001). A stakeholder appro-
ach to strategic management.
Freeman, R. E. and Reed, D. L. (1983). Stockholders and
shareholders: a new perspective on corporate gover-
nance. California Management Review, 25(3):88–
106.
Gehanno, J.-F., Rollin, L., and Darmoni, S. (2013). Is the
coverage of google scholar enough to be used alone
for systematic reviews. BMC medical informatics and
decision making, 13(1):7.
Gray, J. E., Hamilton, M. C., Hauser, A., Janz, M. M., Pe-
ters, J. P., and Taggart, F. (2012). Scholarish: Google
scholar and its value to the sciences. Issues in Science
and Technology Librarianship, 70(Summer).
Greer, D. and Ruhe, G. (2004). Software release planning:
an evolutionary and iterative approach. Information
and Software Technology, 46(4):243–253.
Halpin, T. (1998). Object-role modeling (orm/niam). In
Handbook on architectures of information systems,
pages 81–103. Springer.
Kabbedijk, J., Wnuk, K., Regnell, B., Brinkkemper, S.,
et al. (2010). What decision characteristics influence
decision making in market-driven large-scale software
product line development? Hildesheimer Informatik-
Berichte, 2010:42–53.
Kahneman, D. and Tversky, A. (1979). Prospect theory: An
analysis of decision under risk. Econometrica: Jour-
nal of the Econometric Society, pages 263–291.
Khurum, M., Uppalapati, N., and Veeramachaneni, R. C.
(2012). Software requirements triage and selection:
state-of-the-art and state-of-practice. In Software
Engineering Conference (APSEC), 2012 19th Asia-
Pacific, volume 1, pages 416–421. IEEE.
Kitchenham, B. and Charters, S. (2007). Guidelines for per-
forming systematic literature reviews in software en-
gineering. Engineering, 2(EBSE 2007-001). EBSE
2007-001.
Levy, Y. and Ellis, T. J. (2006). Towards a framework of
literature review process in support of information sy-
stems research. In Proceedings of the 2006 Infor-
ming Science and IT Education Joint Conference, vo-
lume 26.
Li, C., Van Den Akker, J., Brinkkemper, S., and Diepen, G.
(2007). Integrated requirement selection and schedu-
ling for the release planning of a software product. In
Requirements Engineering: Foundation for Software
Quality, chapter Requirements Engineering: Founda-
tion for Software Quality, pages 93–108. Springer.
Li, L., Harman, M., Wu, F., and Zhang, Y. (2017). The va-
lue of exact analysis in requirements selection. IEEE
Transactions on Software Engineering, 43(6):580–
596.
Mitchell, R. K., Agle, B. R., and Wood, D. J. (1997). To-
ward a theory of stakeholder identification and sa-
lience: Defining the principle of who and what really
counts. Academy of management review, 22(4):853–
886.
Mongin, P. (1997). Expected utility theory. Handbook of
economic methodology, 342350.
Neuhaus, C., Neuhaus, E., Asher, A., and Wrede, C. (2006).
The depth and breadth of google scholar: An em-
pirical study. portal: Libraries and the Academy,
6(2):127–141.
Noruzi, A. (2005). Google scholar: The new generation of
citation indexes. Libri, 55(4):170–180.
Okoli, C. and Schabram, K. (2010). A guide to conducting
a systematic literature review of information systems
research. Available at SSRN 1954824.
Racheva, Z., Daneva, M., Sikkel, K., Herrmann, A., and
Wieringa, R. (2010). Do we know enough about requi-
rements prioritization in agile projects: insights from a
case study. In Requirements Engineering Conference
(RE), 2010 18th IEEE International, pages 147–156.
IEEE.
Regnell, B., Beremark, P., and Eklundh, O. (1998). A
market-driven requirements engineering process: re-
sults from an industrial process improvement pro-
gramme. Requirements engineering, 3(2):121–129.
Regnell, B. and Brinkkemper, S. (2005). Market-driven re-
quirements engineering for software products. In En-
gineering and managing software requirements, chap-
ter Engineering and managing software requirements,
pages 287–308. Springer.
Rittel, H. W. and Webber, M. M. (1973). Dilemmas in a
general theory of planning. Policy sciences, 4(2):155–
169.
Royce, W. W. (1970). Managing the development of large
software systems. In proceedings of IEEE WESCON,
volume 26, pages 328–388. Los Angeles.
Ruhe, G. (2005). Software release planning. Handbook
of software engineering and knowledge engineering,
3:365–394.
Ruhe, G., Eberlein, A., and Pfahl, D. (2002). Quantitative
winwin: a new method for decision support in requi-
rements negotiation. In Proceedings of the 14th in-
ternational conference on Software engineering and
knowledge engineering, pages 159–166. ACM.
Shariff, S. Z., Bejaimal, S. A., Sontrop, J. M., Iansavichus,
A. V., Haynes, R. B., Weir, M. A., and Garg, A. X.
(2013). Retrieving clinical evidence: a comparison
of pubmed and google scholar for quick clinical sear-
ches. Journal of medical Internet research, 15(8).
Sivzattian, S. and Nuseibeh, B. (2001). Linking the se-
lection of requirements to market value: A portfolio-
based approach. In 7th International Workshop on
Requirements Engineering: Foundation for Software
Quality. Interlaken, Switzerland. Citeseer.
Turk, D., France, R., and Rumpe, B. (2014). Assumpti-
ons underlying agile software development processes.
arXiv preprint arXiv:1409.6610.
van de Weerd, I., Brinkkemper, S., Nieuwenhuis, R., Ver-
sendaal, J., and Bijlsma, L. (2006). On the creation
of a reference framework for software product ma-
nagement: Validation and tool support. In Software
Product Management, 2006. IWSPM’06. Internatio-
nal Workshop on, pages 3–12. IEEE.
Wnuk, K., Kabbedijk, J., Brinkkemper, S., Regnell, B., and
Callele, D. (2015). Exploring factors affecting de-
cision outcome and lead time in large-scale require-
ments engineering. Journal of Software: Evolution
and Process, 27(9):647–673.
Wohlin, C. and Aurum, A. (2005). What is important when
deciding to include a software requirement into a pro-
ject or release. In International Symposium on Empi-
ricial Software Engineering, volume 186. IEEE.