innovation and correct elements, which represents
the real outcome. We explore these relations further
in the discussion.
6 LESSONS FROM THE COMPARISON
A regression analysis was used on the variables for
which correlations stood out. The results are shown
in figure 2. Only links with significant regression
results are shown. Type is the contribution of
using the computer-aided BMC software over the
paper-based BMC. As already observed with the
mean values, perceived innovation is slightly higher
with the paper-based BMC, but the R-squared value
is only 0.10. On the other hand, perceived innovation
strongly predicts perceived outcome. Users of the
digital BMC perceived more strongly than users of
the paper-based BMC that it helped them do a better
job. Perceived innovation slightly predicts real
outcome (correct elements), with no difference
between the types.
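As an illustrative sketch only (the data below are synthetic, not the study's measurements, and the study's actual analysis tooling is not described here), the kind of simple regression and R-squared value reported above can be computed as follows:

```python
# Illustrative sketch: ordinary-least-squares regression and R-squared,
# as used to relate perception variables (e.g., perceived innovation ->
# perceived outcome). The scores below are synthetic placeholders.

def ols_r_squared(x, y):
    """Fit y = a + b*x by least squares; return (slope, R-squared)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, 1 - ss_res / ss_tot

if __name__ == "__main__":
    # Synthetic 7-point Likert-style scores for two perception variables.
    perceived_innovation = [3, 4, 5, 5, 6, 6, 7, 4]
    perceived_outcome = [3, 4, 4, 5, 6, 5, 7, 4]
    slope, r2 = ols_r_squared(perceived_innovation, perceived_outcome)
    print(f"slope={slope:.2f}, R^2={r2:.2f}")
```

A low R-squared (such as the 0.10 reported for Type predicting perceived innovation) means the predictor explains only a small share of the variance, even when the regression link is statistically significant.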
On its own, perceived usefulness is rated higher
with the digital tool. This could reflect a bias in the
population of IS students, who are familiar with IT
and might prefer a technical solution to a
paper-based one.
There is no significant difference between the
types in the influence of perceived ease of use on
perceived usefulness. This can be seen as a positive
result for the software tool, because it performs
neither better nor worse. Having at least the
same ease of use as paper is a key result, which
should be reflected upon when considering that the
digital tool has the potential to offer additional
features, providing usefulness that is not possible on
paper.
The computer-aided BMC helps to generate
more elements than a paper-based one; however, it
also has a negative influence on the number of
correct elements. It is easier to generate more
elements, but also to generate more wrong elements.
Users who think that the digital tool helps them
innovate think they have performed better;
however, in our small setup they obtained similar
numbers of correct elements.
In addition to the statistical analysis, we also
observed how the teams worked during the design
task. One observation of particular interest
relates to the process of eliciting elements. On the
paper-based BMC, a discussion first occurs and then
a sticky note element is created and positioned. On
the computer-aided BMC, however, which also
supports collaboration, elements are added first by
each member and then changed to reflect the
consensus. This is interesting because recording the
decision inside the tool means that it can be used
to better support the ongoing business modeling
collaboration process.
Three weeks after the first task, we carried out a
trial experiment with the coherence guidelines using
paper. The results were varied and inconclusive,
although users did say it helped them improve their
model. Problems arose when attempting to test the
guidelines on paper: users have to perform the
checks manually, and in some instances they do not
take the time to repeat them each time they change
something. We therefore posit that although we
showed that guidelines can be used to create
coherent models on paper, it is more appropriate for
such guidelines to be implemented and tested inside
a prototype tool. Here, they can be recomputed each
time a change is detected.
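A minimal sketch of this idea, assuming a hypothetical design (the class, rule, and block names below are illustrative, not the actual prototype's API), is to re-evaluate every coherence rule whenever the canvas changes, so the checks that users skip on paper happen automatically:

```python
# Minimal sketch (assumed design, not the actual prototype): coherence
# rules are recomputed automatically on every change to the canvas,
# instead of relying on users to run manual checks.

class CanvasModel:
    def __init__(self, rules):
        self.blocks = {}      # BMC block name -> set of elements
        self.rules = rules    # callables: model -> list of warning strings
        self.warnings = []

    def add_element(self, block, element):
        self.blocks.setdefault(block, set()).add(element)
        self._recheck()       # recompute all rules on every change

    def _recheck(self):
        self.warnings = [w for rule in self.rules for w in rule(self)]

# Example (hypothetical) coherence rule: every customer segment should
# be reached by at least one channel.
def segments_need_channels(model):
    if model.blocks.get("Customer Segments") and not model.blocks.get("Channels"):
        return ["Customer segments defined but no channel reaches them"]
    return []

canvas = CanvasModel(rules=[segments_need_channels])
canvas.add_element("Customer Segments", "SMEs")
print(canvas.warnings)  # the rule fires immediately after the change
canvas.add_element("Channels", "Direct sales")
print(canvas.warnings)  # the warning clears once a channel exists
```

The point of the design is that the rule set runs as a side effect of editing, so users get feedback as soon as a change breaks coherence rather than only when they remember to check.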
In summary, in our experiment with our test
group, the tested CAD tool was as effective as
paper-based design for the creation of business
models in terms of eliciting elements of the BMC.
This suggests that, with the help of rules, it might be
better suited than paper-based design for testing the
coherence of business models.
7 CONCLUSIONS
To assist BMC design using software tools, we
proposed guidelines that help with elicitation and
testing in order to produce coherent models. Before
implementing such features in a digital tool we
needed to confirm that perception and performance
on a basic BMC design task are at least similar to
those of a paper-based design. With our evaluation
we found that the tested digital tool can be
perceived as useful and does not perform any worse
than its paper-based alternative. Even if CABMD
did not outperform paper-based design, it shows
some promising results, because such tools can be
extended to offer additional features, thus increasing
their usefulness. Features that are much better suited
to digital tools include the continuous re-evaluation
of coherence rules to check a model's validity.
In this paper, we focused on modeling an
existing “as-is” business model. Further research is
needed to explore options that may enable the
design of future “to-be” business models. For
example, rules could be extended to simulate
financial assumptions or validate regulatory
constraints.
Business Model Design - An Evaluation of Paper-based and Computer-Aided Canvases