The GRADE Decision Canvas for Classification and Reflection on
Architecture Decisions
Efi Papatheocharous¹, Kai Petersen², Jakob Axelsson¹, Claes Wohlin², Jan Carlson³, Federico Ciccozzi³, Séverine Sentilles³ and Antonio Cicchetti³
¹Swedish Institute of Computer Science, Kista, Stockholm, Sweden
²Blekinge Institute of Technology, Karlskrona, Sweden
³Mälardalen University, Västerås, Sweden
Keywords:
Software Engineering, Architecture Knowledge, Decision Documentation, Decision Canvas, Decision
Template.
Abstract:
This paper introduces a decision canvas for capturing architecture decisions in software and systems engineer-
ing. The canvas leverages a dedicated taxonomy, denoted GRADE, meant for establishing the basics of the
vocabulary for assessing and choosing architectural assets in the development of software-intensive systems.
The canvas serves as a template for practitioners to discuss and document architecture decisions, i.e., capture,
understand and communicate decisions among decision-makers and to others. It also serves as a way to re-
flect on past decision-making activities devoted to both tentative and concluding decisions in the development
of software-intensive systems. The canvas has been assessed by means of preliminary internal and external
evaluations with four scenarios. The results are promising as the canvas fulfills its intended objectives while
satisfying most of the needs of the subjects participating in the evaluation.
1 INTRODUCTION
The growing complexity and size of modern software-
intensive systems have put architectural design de-
cisions at the forefront of the concerns of software
and system engineers. Many intricate factors need to
be considered in decision-making, such as continuous deliveries and the need to develop high-quality, large-scale complex systems with flexible architectures that support future adaptation and maintenance. An important factor
which allows the efficient adaptation and evolution
of architectures is enabling access to information and
resources to make appropriate decisions in a timely
manner.
Towards achieving this, a decision canvas tem-
plate has been developed within the ORION research
project (http://orion-research.se/). The canvas can be
used for carrying out meaningful discussion among
decision-makers and gives support in the process of
decision-making by introducing the most important
elements to document and describe a decision. The
canvas also makes use of a dedicated decision taxonomy, denoted GRADE and introduced in (Papatheocharous et al., 2015).
In particular, the proposed decision canvas al-
lows software and system engineers to express deci-
sion scenarios in a unique illustrative and structured
manner, by using a template, which takes into ac-
count relevant properties and contextual elements re-
garding a decision. These characteristics comprise, for example, relevant decision requirements, benefits, constraints and contextual information which, once defined, are useful to position decisions in relation to other decisions made in the complex software-intensive systems landscape. Once the elements related to a decision are captured, they can be easily accessed to classify decisions and serve as facilitators for decision discussions.
This paper presents the GRADE decision canvas
template, a development motivated by the observa-
tion that architectural decision-making is often con-
ceived as a complex problem with no tangible for-
mulation (Van Vliet and Tang, 2016). Decision prob-
lems and their solutions are difficult to describe, ac-
cess and reuse. Making better decisions heavily de-
pends on the ability to capture, understand and com-
municate the information involving the decisions to
the decision-makers and other respective stakeholder
roles. Yet many decisions lack this support, caus-
ing inability to reflect on past decision-making activi-
ties for future circumstances of architectural knowl-
edge/decision reuse. Given the importance of this
topic and its consequences on the selection of tech-
nological solutions for the development of software-
intensive systems (e.g., software services, compo-
nents, platforms and systems), the lack of support for
capturing, understanding and communicating infor-
mation and the rationale behind decisions is surpris-
ing, even though the complexity and fuzziness of the
topic of architecture decisions for modern software-
intensive systems makes it understandable.
The GRADE decision canvas pursues two objec-
tives: (1) to serve as a common vocabulary and basis
for practitioners to capture, discuss, understand and
communicate decisions, and (2) to serve as a way to
reflect on past decision-making activities for tenta-
tive and concluding decisions in the development of
software-intensive systems. In this respect, this work
presents the decision canvas in detail and also exem-
plifies its usage through a set of preliminary evalu-
ation activities. It discusses how the template can be
used to document decision scenarios by means of con-
crete cases, used as evaluation opportunities, and also to provide a basis for carrying out discussions among
decision-makers.
The remainder of the paper is organized as fol-
lows: Section 2 discusses existing work related to this
contribution, while Section 3 presents the GRADE
decision canvas. Section 4 describes the evaluation
conducted for the canvas. Section 5 concludes the pa-
per and presents future investigation directions.
2 RELATED WORK
A lot of the information involved in architecture
decision-making is sensitive and relatively hard to
access. Decision knowledge regarding current or
past cases of architectural design is of primary importance to improve decision-making, or at least to avoid falling into the same pitfalls as in the past. Such knowledge about architectural decisions often gets lost among practitioners' meeting minutes, oral conversations, whiteboards, reports and notebooks,
and is not easily accessible to others (researchers, aca-
demics or practitioners).
Several efforts have been focused on the defini-
tion of structures to support capturing of architectural
knowledge. For example, in (Tyree and Akerman,
2005) decision templates are presented in the form
of tables to capture the design rationale and context.
In (Van Heesch et al., 2012a) decision views based
on the conventions of ISO/IEC/IEEE 42010 are de-
scribed to support documentation. In (Van Heesch
et al., 2012b) the system context of decisions is de-
fined by a set of forces affecting the problem, i.e., any
aspect of the problem considered when solving it. In
(Manteuffel et al., 2016) the implementation of a tool
for documenting decisions is presented. The authors
show that it increases quality of decision documen-
tation and productivity, while it is considered highly
useful to software architects. In the above works, the issues of usefulness, perceived ease of use and documentation effort are raised as primary concerns. However, these works lack appropriate visualization support and a common vocabulary as a basis, which causes an inability to communicate and understand decisions as well as to reflect on and discuss past decision-making over time.
Understanding and documenting decisions within
the area of architectural design of software-intensive
systems is a great challenge. This is because the fundamental concepts related to and affecting the decision process are inherently complex and difficult to capture, document and maintain. These concepts include, for example, information about what the decision concerns, who is participating in the decision, what the considered alternatives are, and the decision rationale. These concepts, specifically Goals, Roles, Assets, Decision, and Environment, are the fundamental concepts of a decision support taxonomy, denoted GRADE, coined previously in (Papatheocharous et al., 2015). The GRADE decision canvas proposed in this paper makes use of this taxonomy to support architecture knowledge capture.
In (Tang et al., 2008) the authors showed that
architecture design quality improves when design-
ers are equipped with knowledge on design reason-
ing. A repository serving knowledge extraction and providing evidence-based support for future decisions is therefore needed. In order to construct the knowl-
edge repository, continuous, reliable, transparent and
practical data collection needs to be established. The
GRADE decision canvas, presented in the next sec-
tion, aims to support exactly this purpose.
3 GRADE DECISION CANVAS
The GRADE decision canvas has been created using
design science (Peffers et al., 2007), in which con-
crete artefacts are created and evaluated in an itera-
tive way. Specifically, we conducted two evaluation rounds: one internal and one external. The GRADE decision canvas, shown in Figure 1, is used to provide a useful architectural data collection and communication mechanism, as well as to carry out discussions
around the decision.

Figure 1: GRADE decision canvas template.
At the top of the template, meta-information about the decision is specified: the team, project, case or company name involved in the decision, the date, and whether the canvas is the primary one, an alternative canvas or an addition to other canvases.
In the main part of the canvas the left-hand side
depicts the GRADE taxonomy overview diagram,
which illustrates the high level categories used to de-
scribe decisions, along with lists of examples (shown
in gray). These categories are defined in detail in (Pa-
patheocharous et al., 2015), and consist of the follow-
ing:
Goals (G): describes the main goal of the de-
cision as a value which contributes to one or
more perspectives, such as customer-perspective,
financial-perspective, innovation & learning per-
spectives, internal business-perspective, and
market-perspective.
Roles (R): describes the stakeholders of a de-
cision. They are described based on literature
studies in decision-making, i.e., (Tan and Sheps,
1998), (Herrmann et al., 2004) and (Morisset
et al., 2014).
Assets (A): describes the different alternatives or
options of considered assets.
Decision (D) methods and criteria: describes the
way to reach a decision and the criteria (or
properties) used.
Environment (E): describes the context of the de-
cision one needs to know to understand the deci-
sion case.
Each of these categories is illustrated as a slice in the
GRADE taxonomy overview diagram (left-hand side
of Figure 1). They can be used as a checklist to cap-
ture the important elements in architectural decision-
making in software and systems engineering.
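As an illustration of how a canvas entry could be stored in a structured form (for example, to feed the kind of knowledge repository discussed in Section 2), the sketch below represents a decision record with one set of marked taxonomy items per GRADE category plus the decision outcome. The class, field names and example values are illustrative assumptions rather than part of the canvas or any ORION tooling.

```python
# Minimal sketch (illustrative only): a GRADE canvas entry as a structured record.
from dataclasses import dataclass, field

@dataclass
class GradeDecisionRecord:
    # Meta-information from the top of the canvas
    case_name: str            # team / project / case / company name
    date: str
    canvas_kind: str          # "primary", "alternative" or "additional"
    # One set of marked taxonomy items per GRADE category
    goals: set = field(default_factory=set)
    roles: set = field(default_factory=set)
    assets: set = field(default_factory=set)
    decision: set = field(default_factory=set)    # decision methods and criteria
    environment: set = field(default_factory=set)
    outcome: str = ""         # decision outcome, added on top of the taxonomy

# Hypothetical usage: marking the items identified for a decision case.
record = GradeDecisionRecord(
    case_name="example-project",
    date="2016-11-15",
    canvas_kind="primary",
    goals={"internal business-perspective"},
    assets={"COTS", "in-house"},
    decision={"cost", "performance"},
)
```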
On the right-hand side of the canvas, the following
questions are listed for each GRADE decision canvas
category to capture their most important aspects:
Goal (G) Element: What is the main goal of the
decision? What is the starting point of the deci-
sion? What value is delivered with the decision
and who receives this value? What problem is the
decision solving? What needs are satisfied with
the decision?
Roles (R) Element: Who is mostly involved in
the decision? What are their most important roles
(in project, case, or company)? How much does
each role contribute to the decision? What is the
decision level (strategic, tactical, or operational)?
Which organizational roles are involved? What
are the decision roles (initiator, decider, or influ-
encer)?
Assets (A) Element: Which assets are considered within the decision? What usage is envi-
sioned for these assets (reuse, adaptation, buy, or
develop)? Where do assets come/originate from
(COTS, open source, outsource, or in-house)?
What is the type of assets considered (system,
software, service, information, or hardware as-
set)? What main asset characteristics are consid-
ered (performance, cost, or other)?
Decision (D) Element: How do you reach an
architectural decision? Which are the most im-
portant decision criteria considered (performance,
cost, or other)? Which decision method(s) do you
typically use? How useful do you consider the de-
cision method(s) you typically use? How are the
decision method(s) you typically use integrated
with your decision process/routines? How much
time/cost/resources are needed to make a decision?
Environment (E) Element: Describe the gen-
eral context of the decision to make it un-
derstandable to others. What are the most
important contextual factors of the decision
(related to the product/stakeholders/market &
business/organisation/development technologies
etc.)? Which contextual factors are static (do not
change and will probably not change much in the
near future)? Which contextual factors are likely
to change in the near future?
Decision outcome: What is the final decision
made? How would you evaluate the decision?
Would you change anything in this decision (con-
sider what/how/who/why/when aspects)? The decision outcome is a required addition to the elements described in the GRADE taxonomy, so that outcomes of the decision process can be tracked and used in future decisions.
The questions may be used as guidelines to collect
issues of primary interest in decision cases. These
questions are based on points of interest identified
in previous work investigating architectural decisions
(Petersen et al., 2017). The decision canvas is pro-
vided in an electronic editable form (available here:
http://orion-research.se/GRADE/canvas16.pdf), i.e.,
the gray text is to be replaced by practitioners' re-
sponses.
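As an illustration of how the editable form could be handled programmatically, the sketch below organizes abridged versions of the guiding questions by canvas element and produces a blank response structure; the dictionary layout and the helper function are assumptions, not part of the provided PDF form.

```python
# Sketch (illustrative only): abridged guiding questions per canvas element.
CANVAS_QUESTIONS = {
    "Goals": [
        "What is the main goal of the decision?",
        "What value is delivered with the decision and who receives it?",
    ],
    "Roles": [
        "Who is mostly involved in the decision?",
        "What is the decision level (strategic, tactical, or operational)?",
    ],
    "Assets": [
        "Which assets are considered within the decision?",
        "Where do the assets originate from (COTS, open source, outsource, in-house)?",
    ],
    "Decision": [
        "Which decision criteria and method(s) are used?",
        "How much time/cost/resources are needed to make a decision?",
    ],
    "Environment": [
        "What are the most important contextual factors of the decision?",
        "Which contextual factors are static and which are likely to change?",
    ],
    "Decision outcome": [
        "What is the final decision made and how would you evaluate it?",
    ],
}

def empty_canvas():
    """Return a blank response structure, one empty answer per question."""
    return {element: {question: "" for question in questions}
            for element, questions in CANVAS_QUESTIONS.items()}
```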
The GRADE decision canvas has been evaluated
by means of two preliminary evaluation rounds, de-
scribed in detail in the next section.
4 PRELIMINARY EVALUATION
We present the evaluation method (Section 4.1), and
the results of the evaluation (Section 4.2). First, an in-
ternal evaluation round was carried out (within the de-
velopment team of the canvas) and based on the feed-
back we created a new version of the canvas (as pre-
sented in Section 3). Then, we conducted an external
evaluation round (outside the development team, but
still within the project). The purpose of the evalua-
tion is to stand as static preliminary evaluation with a
limited scope. The plan is to further revise the canvas
based on more types of evaluations, including pilot-
ing in industry, to ensure a mature and useful decision
canvas for architectural decisions in industry.
4.1 Method
The evaluation method includes defining the follow-
ing:
Definition of research questions: The following
research questions were considered to assess the ob-
jectives of the GRADE decision canvas (see Sec-
tion 1): RQ1: Would two individuals, who are in-
dependently classifying the same decision scenario,
reach similar classifications? RQ2: Would individu-
als using the canvas to convey the details of a deci-
sion scenario, find the canvas useful for reflecting on
decision-making activities?
Selection of subjects: The evaluation was done in-
ternally (utilizing researchers who were also creators
of the GRADE taxonomy (Papatheocharous et al.,
2015)) and externally (utilizing researchers who were involved in neither the GRADE canvas nor the taxonomy creation). For the internal evaluation we utilized
some of the co-creators of the GRADE taxonomy, to
identify shortcomings with respect to the usage of the
taxonomy as a common language. They made use
of two decision scenario descriptions produced in the
context of the ORION project and improvements to
the canvas were carried out based on the results of
this internal evaluation. The motivation for utilizing
the creators was that they were already familiar with
the GRADE taxonomy and its terminology and thus
did not require any training. Nevertheless, the risk is that no outsider perspective would be considered, which limits the external validity of the findings. To overcome this limitation, one researcher in the internal evaluation and two researchers in the external evaluation had not been involved in the GRADE taxonomy creation.
Data collection: For the internal evaluation we
chose to utilise two decision scenarios to increase the
generalizability of the findings. The scenarios were
Table 1: Data Collection.
Scenario | Scenario description | Creator | Evaluators
MOPED | A decision problem of allocating an image processing system to different ECUs | J. Axelsson | Internal (K. Petersen and S. Sentilles)
Global | A decision problem of outsourcing versus in-house development | K. Petersen | Internal (A. Cicchetti and F. Ciccozzi)
LiICSE | Decisions to choose among COTS and OSS components | (Li et al., 2006b) | External (C. Wohlin and J. Carlson)
LiEMSE | Decisions to choose among build or buy options | (Li et al., 2006a) | External (C. Wohlin and J. Carlson)
described independently by two persons and were
within the domains of automotive (MOPED) and au-
tomation (Global). The scenario descriptions were
9 and 4 pages long (contained about 2650 and 1400
words) respectively. The descriptions were reviewed
independently by two persons who were not involved in the scenarios' creation. Using the GRADE
taxonomy overview in the decision canvas (see left-
hand side of Figure 1), they classified the scenar-
ios independently. This was carried out to validate
the consistency of using the GRADE decision canvas.
The right-hand section of the decision canvas was also
used as a means for data collection. The first two rows of Table 1 provide an overview of the two scenarios (MOPED and Global), their creators and their evaluators.
Each evaluator marked on the canvas (either electron-
ically or using pen and paper) the parts in the canvas
they identified as relevant to the scenarios. When they
found information that was not present in the GRADE
taxonomy they added the information as additional
notes (using the provided questions as guidelines).
The information was extracted into a spreadsheet
and thereafter used to determine the consistency with
which the evaluators classified the scenarios. Subse-
quently, the creators and evaluators commented inde-
pendently on the actual usage of the decision canvas
for illustrating the scenario and on its usability for reflecting on the decision in future activities. This feedback was used to improve the canvas. If the two evaluators identified similar elements, this was an indication that different persons interpreted the scenario in the same way.
The external evaluation was performed in a similar
manner by two evaluators carrying out the following
steps:
Select papers for scenario extraction. The first au-
thor of a systematic literature review on architec-
tural decision-making (Badampudi et al., 2016)
was asked to select two papers from the papers in-
cluded in the review that she judged as being rea-
sonable to describe using the GRADE canvas. Not
all papers included in the literature review covered
actual decisions or studies of decisions; some pa-
pers presented methods for making a selection of
a software asset, and these papers are not suitable
for classification using the GRADE canvas. Based
on this, random selection was discarded as a suit-
able method, and hence the first author made an
informed recommendation of articles to use in the
external evaluation. The two papers selected are
presented in the last two rows of Table 1.
Independently extract and classify the information
provided in the selected papers using the same
procedure as for the internal evaluation.
The two evaluators discussed their findings and
observations with each other and with one of the
other authors observing and taking notes.
The two evaluators provided individual sum-
maries of their findings and observations for anal-
ysis and comparison.
Data analysis: To answer the first research ques-
tion (RQ1) we compared the classification of the sce-
narios to determine the similarity. Each element iden-
tified by the first evaluator is counted as part of set A
and each element identified by the second evaluator is
counted as part of set B. We then utilized the Jaccard
index (McCormick et al., 1992) for the comparison,
which is defined as:
J(A, B) = \frac{|A \cap B|}{|A \cup B|}. (1)
The index provides a value of 0 ≤ J(A, B) ≤ 1; the closer the index is to 1, the higher the similarity. To
answer the second research question (RQ2), analysis
and synthesis of the feedback received was carried out
and a summary is provided in the results section.
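As an illustration of this analysis step, the following sketch computes the Jaccard index for two classifications, each represented as the set of taxonomy items an evaluator marked on the canvas; the function and the example sets are hypothetical, not the actual evaluation data.

```python
# Sketch (illustrative only): Jaccard similarity between two evaluators'
# classifications, each given as a set of marked taxonomy items.
def jaccard(a: set, b: set) -> float:
    """Return |A intersection B| / |A union B| for two classification sets."""
    if not a and not b:
        return 1.0  # convention: two empty classifications count as identical
    return len(a & b) / len(a | b)

# Hypothetical example for one category (Assets):
evaluator_a = {"COTS", "open source", "in-house"}
evaluator_b = {"COTS", "in-house", "outsource"}
print(jaccard(evaluator_a, evaluator_b))  # 2 shared of 4 distinct items -> 0.5
```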
Validity threats: Two main threats have been iden-
tified with respect to the internal and external evalua-
tions. First, the evaluators are members of the project
where the GRADE canvas has been developed, and
they may have acquired some knowledge about the
canvas, even without participating in its development.
Second, K. Petersen (creator of one of the scenarios)
and D. Badampudi (selector of the papers from the
literature review) have both taken part in the devel-
opment of the GRADE taxonomy, and thus there is a risk that they may have affected the formulation of the evaluation of the GRADE canvas, i.e., may have caused some aspects of the decisions in the scenarios to be favoured over others.
Despite these threats, it was identified that several aspects are not sufficiently covered by the canvas, and hence areas for improvement were identified.

Figure 2: Jaccard index for the evaluated scenarios and papers, per GRADE element (Goals, Environment, Decision, Asset, Roles) and in total. [Recoverable chart values: Internal MOPED (Goals 0.50, Environment 0.17, Decision 0.63, Asset 0.54, Roles 0.50; total 0.51); Internal Global (0.50, 0.50, 0.67, 1.00, 0.82; total 0.76); External LiEMSE2006 (0.00, 0.00, 0.21, 0.38, 0.00; total 0.19); External LiICSE2006 only partially recoverable (0.50, 0.48, 0.63, 0.51).]
4.2 Results
The results are structured according to the research questions formulated in Section 4.1.
4.2.1 RQ1: Similarity of Classifications
First, we present the quantitative findings and then the
qualitative feedback/reflections provided by the eval-
uators to explain the reasons for the differences and
the avenues for improvements to the canvas.
Quantitative findings: Figure 2 shows the Jaccard
index values for the decisions in the two evaluations.
A high Jaccard index means that two evaluators reading the same scenario have a more similar view of its classification. Overall, the Jaccard index shows rather low values for the consistency among evaluators, both internally and externally. The figure also shows that the similarity was highest for the Global scenario. The main difference between the Global scenario and the others was that its description was much more structured, i.e., the decision-making steps were grouped and the information was easier to identify.
Furthermore, it is interesting to observe that the
Goals and Environment dimensions have the lowest
similarity values among the main elements. This is due to differences in the level of detail marked down by the evaluators.
Qualitative reflections: We first explore the rea-
sons for the differences. The internal evaluators iden-
tified that some of the canvas elements’ levels were
not clear. For example, they had difficulty finding all the applicable categories of Roles. The ordering of
the categories was also found confusing, i.e., Asset
origins, types, and properties. It was also unclear if
the term ”system” includes both software and hard-
ware. Some questions appearing in the right-hand
side of the canvas were also found unclear. Also,
defining the connection between different decision
cases over time was not easy to do. The evaluators
also mentioned that the alignment between the items on the right-hand side of the canvas and the elements on the left-hand side was perhaps not optimal.
Other remarks for improvement included: some duplicate items appear within the Assets and Decision criteria (e.g., Cost), and as these are only examples, the canvas could have been better without them (i.e., the leaves of the taxonomy could be left empty and filled in based on the decision case). A final remark from the internal evaluation was
to consider renaming Assets to Alternatives.
The external evaluators highlighted that both pa-
pers they used discuss decision-making in general
rather than presenting a specific decision case. Based
on this, it was impossible to make a classification of
the Environments and the Roles involved, for two reasons: first, the demographics are not described at that level of detail, and second, even if they had been, the mapping to specific decisions is not described. Inconsistencies also occurred because one reviewer highlighted that it was not possible to use Environment and Roles, while the other used the GRADE taxonomy to describe the
demographics of the study. Hence, a well-defined unit
of analysis (decision) may be used to increase the con-
sistency. Some indication of this is provided by the
high values achieved for the internal Global scenario,
which was a structured and focused decision case.
Further issues raised were: priorities are not supported in the canvas (e.g., with respect to which decision criteria are the most important). It was unclear why
the last level of Roles (i.e., strategic, tactical, and op-
erational) appears in that category. It was typically
not explicitly stated in the papers if a certain role was
representing the strategic, tactical or operational level.
Hence, this information may be better suited to char-
acterize goals or decisions. The order in which roles are presented was suggested to be changed, with the items on the second and third levels placed first.
The structure of the GRADE taxonomy overview
diagram was also a source for inconsistencies. In par-
ticular, items or words were found in more than one
place. This was perceived as unnecessary, since the objective is to support decision-making and not necessarily to ensure completeness with respect to all aspects. Two examples of redundant information are the pres-
ence of “Open Source” in two levels within Assets,
and the use of "Cost" in different shapes. Cost appears in the Goals category, but also in the Assets category. However, the goals will drive the aspects to be
considered in both the decisions and the expectations
of the assets. For example, if the goal is to decrease
cost then cost must be a key aspect in the decision and
strongly related to the actual asset. Given the redun-
dancy of items, both reviewers may want to indicate the same thing, but a disagreement is shown in the statistical analysis as they provide their information in different
places of the taxonomy. Thus, in practice the agree-
ment may in fact be higher than indicated by the Jac-
card index.
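To make this effect concrete, the brief sketch below (hypothetical, reusing the Jaccard function idea from Section 4.1) shows how two evaluators who indicate the same concern in different places of the taxonomy register as complete disagreement in the computed index.

```python
# Sketch (illustrative only): redundancy in the taxonomy can understate agreement.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Both evaluators mean "cost" but mark it in different places of the taxonomy,
# so the computed similarity for this pair of markings is 0 rather than 1.
evaluator_a = {("Goals", "cost")}
evaluator_b = {("Assets", "cost")}
print(jaccard(evaluator_a, evaluator_b))  # 0.0
```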
4.2.2 RQ2: Implications and Usefulness
The internal as well as external evaluators highlighted
the utility of the GRADE decision canvas. We first
present the remarks of the internal evaluators:
Ease of use: The main feedback from the eval-
uators of the canvas was that they found it to be
simple to use, intuitive and practical. The canvas was used with hardly any guidance required. Some clarifications might be
needed though for individuals not familiar with
the GRADE taxonomy. In the future it is sug-
gested that the elements defined in the GRADE
taxonomy should be provided to practitioners to-
gether with the canvas.
Accessibility after classification: Decision cases
and their details captured with the canvas are eas-
ier to read. Reading the details of decision cases in the template format was more intuitive and faster
to process than in a textual descriptive form.
Ability to completely capture a scenario: The au-
thors of the decision scenarios found that their
scenarios were well captured with the canvas by
both individuals to a sufficient level of detail and
that nothing crucial was missing (i.e., the capture was complete when done in pairs). This is why the canvas is suggested to be used for consensus discussions based on individual usage of the canvas.
The following reflections and usages for the GRADE
decision canvas were presented by the external evalu-
ators:
Ambition of GRADE with regard to completeness:
The GRADE taxonomy can never be complete. Completeness would require including all quality aspects covered by all the different quality models and
standards. This is infeasible, and hence incom-
pleteness has to be accepted.
Usefulness for describing decisions: Despite the relatively low formal agreement, the canvas (in particular if improved) is helpful as a tool to describe a decision. It makes a number of items (words) explicit and as such provides a good basis for discussions.
Overall, the impression was that it is a useful tool
to capture important aspects of a decision case.
Facilitator for a discussion around decisions: The
canvas supports the discussion around a decision,
but it is insufficient to describe a decision without
a consensus discussion. In other words, the canvas
is primarily a facilitator for discussions around a
decision. This is further supported by the incon-
sistencies when individual reviewers conduct an
assessment, though the combined results and dis-
cussions allow convergence.
Towards achieving reliability: To accurately doc-
ument and classify decision scenarios the external
evaluators suggested that practitioners should first
use the canvas to capture their own view, and then carry out a discussion in pairs to achieve higher reliability. They highlighted that through a discussion
on the disagreements they would be able to sug-
gest a joint classification that they both agree on.
5 CONCLUSIONS
In this paper we proposed the GRADE decision canvas for decision-making support in architecting software-intensive systems and carried out a set of preliminary semi-formal evaluation rounds of it. We found
that, despite the relatively low levels of formal agree-
ment between the evaluators, the GRADE decision
canvas is a helpful tool to describe decisions. It makes
the characteristics of a decision explicit and thus pro-
vides a sound basis for discussions. The GRADE de-
cision canvas supports the discussion around a deci-
sion, but it is insufficient to describe a decision without a consensus discussion. In other words, the
GRADE canvas is primarily a facilitator for discus-
sions around a decision. Thus, the GRADE decision
canvas can be used to illustrate decision scenarios, even from different individuals, in a comprehensive and structured way. It can serve as a common vocab-
ulary and basis to capture, understand and communi-
cate decisions, as well as reflect on decision-making.
One important limitation of the canvas is the difficulty of effectively visualizing the relations between the elements describing a decision and even the relations between different decisions. This makes it difficult to carry out trade-off and impact analysis of the options within a decision. We are currently investigating possible ways to overcome this issue and thereby make those analyses possible for the GRADE canvas.
ACKNOWLEDGEMENTS
The work is partially supported by a research grant
for the ORION project (reference number 20140218)
from The Knowledge Foundation in Sweden.
REFERENCES
Badampudi, D., Wohlin, C., and Petersen, K. (2016). Soft-
ware component decision-making: In-house, OSS,
COTS or outsourcing - A systematic literature review.
Journal of Systems and Software, 121:105–124.
Herrmann, T., Jahnke, I., and Loser, K.-U. (2004). The role
concept as a basis for designing community systems.
In COOP, pages 163–178.
Li, J., Bjørnson, F. O., Conradi, R., and Kampenes, V. B.
(2006a). An empirical study of variations in COTS-based software development processes in the Norwegian IT industry. Empirical Software Engineering,
11(3):433–461.
Li, J., Conradi, R., Slyngstad, O. P. N., Bunse, C., Torchi-
ano, M., and Morisio, M. (2006b). An empirical study
on decision making in off-the-shelf component-based
development. In Proceedings of the 28th international
conference on Software engineering, pages 897–900.
ACM.
Manteuffel, C., Tofan, D., Avgeriou, P., Koziolek, H., and
Goldschmidt, T. (2016). Decision architect: A decision documentation tool for industry. Journal of Systems and Software, 112:181–198.
McCormick, W., Lyons, N., and Hutcheson, K. (1992).
Distributional properties of Jaccard's index of similarity. Communications in Statistics - Theory and Methods, 21(1):51–68.
Morisset, C., Yevseyeva, I., Groß, T., and van Moorsel, A.
(2014). A formal model for soft enforcement: influ-
encing the decision-maker. In Security and Trust Man-
agement, pages 113–128. Springer.
Papatheocharous, E., Petersen, K., Cicchetti, A., Sentilles,
S., Shah, S. M. A., and Gorschek, T. (2015). Decision
support for choosing architectural assets in the devel-
opment of software-intensive systems: The GRADE taxonomy. In Proceedings of the 2015 European Con-
ference on Software Architecture Workshops, page 48.
ACM.
Peffers, K., Tuunanen, T., Rothenberger, M. A., and Chat-
terjee, S. (2007). A design science research method-
ology for information systems research. Journal of
management information systems, 24(3):45–77.
Petersen, K., Badampudi, D., Shah, S., Wnuk, K.,
Gorschek, T., Papatheocharous, E., Axelsson, J., Sen-
tilles, S., Crnkovic, I., and Cicchetti, A. (2017).
Choosing component origins for software intensive systems: In-house, COTS, OSS or outsourcing? A case survey. IEEE Transactions on Software Engineering.
Tan, J. K. and Sheps, S. B. (1998). Health decision support
systems. Jones & Bartlett Learning.
Tang, A., Tran, M. H., Han, J., and Van Vliet, H. (2008).
Design reasoning improves software design quality. In
International Conference on the Quality of Software
Architectures, pages 28–42. Springer.
Tyree, J. and Akerman, A. (2005). Architecture decisions:
Demystifying architecture. IEEE software, 22(2):19–
27.
Van Heesch, U., Avgeriou, P., and Hilliard, R. (2012a). A
documentation framework for architecture decisions.
Journal of Systems and Software, 85(4):795–820.
Van Heesch, U., Avgeriou, P., and Hilliard, R. (2012b).
Forces on architecture decisions: A viewpoint. In Soft-
ware Architecture (WICSA) and European Conference
on Software Architecture (ECSA), 2012 Joint Working
IEEE/IFIP Conference on, pages 101–110. IEEE.
Van Vliet, H. and Tang, A. (2016). Decision making in soft-
ware architecture. Journal of Systems and Software,
117:638–644.