students naturally started to use the switching func-
tionality of the interface, thus getting familiar with
the Interpretation Switcher in a concrete application
of musical relevance. In the second part, they were
then asked to give feedback on the usefulness and op-
erability of the interface itself.
The questions of the first part of the questionnaire referred to different sections of the first movement of the Pathétique. As a warm-up, we started with a short section (Section A), which consisted only of the first three measures, see Fig. 1 (a). This section was cut out from the nine aligned performances and presented to the students via the Interpretation Switcher interface. Even though it is rather short, Section A already offers the pianists a wide range of interpretive possibilities, so that comparing the different performances constitutes a musically interesting task.
In the first question (A1), the participants had to rate the nine different interpretations of Section A with respect to the three musical aspects of dynamics, articulation, and agogics. Here, the rating scale ranged from 1 to 10, where 1 means poor and 10 means excellent. In addition, they had to rate their total impression of this section's performances using the same scale. Afterwards, in question A2, they had to identify their own interpretation (if applicable) on the basis of Section A alone. Then, the performances of Section A were closed and a different section (Section B) was presented to them via the Interpretation Switcher. Here, Section B consisted of the technically more involved mm. 89-100, see Fig. 1 (d). The students then had to answer the corresponding questions (B1, B2).
At the beginning of the questionnaire, the students
were confronted with different performances of rela-
tively short sections. Here, only switching between
the performances was required to properly answer the
questions—jumping back and forth within a perfor-
mance was not necessary. In this way, the students
became familiar with the basic switching functional-
ity of the interface. In the next stage, they were pre-
sented with the nine performances of the entire ex-
position. They now had to rate their total impres-
sion of the first theme (mm. 11 ff., see Fig. 1 (b)),
of the second theme (mm. 51 ff., see Fig. 1 (c)), and
of the entire exposition (questions E1, E2 and E5).
Here, the new challenge in using the Interpretation Switcher was not only to switch between the different performances but also to find the corresponding entry points of the two themes within the recordings. Another task (E3) was to order the nine different interpretations of the second theme with respect to tempo, beginning with the slowest and ending with the fastest. This task required the students to constantly switch between and jump within the performances, forcing them to use the functionality of the interface extensively. In question E4, they again had to identify their own performance (if applicable), this time with the entire exposition at their disposal.
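The present section does not detail how this switching and jumping is realized internally; presumably it rests on the precomputed alignments, which relate each playback position in one recording to the corresponding position in every other recording. The following Python sketch illustrates one way such a position mapping could be implemented, assuming each performance is represented by a list of strictly increasing, aligned time anchors (e.g., measure onsets in seconds); all names and values are hypothetical and only serve to make the idea concrete, they are not the actual implementation of the Interpretation Switcher.

from bisect import bisect_right

def map_position(t, anchors_src, anchors_dst):
    """Map playback time t (seconds) in the source recording to the
    corresponding time in the destination recording by piecewise-linear
    interpolation between aligned anchor points (illustrative assumption)."""
    # Clamp positions outside the aligned region to its boundaries.
    if t <= anchors_src[0]:
        return anchors_dst[0]
    if t >= anchors_src[-1]:
        return anchors_dst[-1]
    # Locate the anchor interval containing t (anchors are strictly increasing).
    i = bisect_right(anchors_src, t) - 1
    frac = (t - anchors_src[i]) / (anchors_src[i + 1] - anchors_src[i])
    return anchors_dst[i] + frac * (anchors_dst[i + 1] - anchors_dst[i])

# Hypothetical anchors: measure onsets of a short section in two performances.
perf_1 = [0.0, 2.1, 4.3, 6.4]   # performance 1
perf_2 = [0.0, 2.6, 5.1, 7.9]   # performance 2 (played more slowly)

# Switching from performance 1 to performance 2 at t = 3.0 s:
print(map_position(3.0, perf_1, perf_2))

With such a mapping, switching to another recording at any moment amounts to looking up the current position in the target recording and continuing playback there, which is exactly the behavior the students relied on when comparing the performances.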
After finishing the questions on musical aspects, the participants were asked in the second part of the questionnaire to evaluate the Interpretation Switcher interface. Here, the idea was to let the participants first use the interface in an application scenario to gather practical experience without knowing that the interface itself would be evaluated afterwards. In the first question (S1), they were asked to rate the user-friendliness and the degree of usability of the Interpretation Switcher on the scale from 1 to 10 described above. We then wanted to know whether any problems had occurred while using the interface (S2). Furthermore, the students were asked to comment on possible improvements and to propose additional functionalities they would have liked to have when working on the first part of the questionnaire (S3). In a last question (S4), they were asked to sketch possible application scenarios in which they could imagine using MIR user interfaces such as the Interpretation Switcher.
3 EVALUATION
3.1 Performance Evaluation
In the first part of the questionnaire, the partici-
pants had to analyze and compare the different per-
formances against each other. Table 1 presents the
results of question A1, where they had to rate the
nine different performances of Section A with regard
to dynamics, articulation, and agogics, as well as in total. The first row of Table 1 gives the performance numbers; the values in each column refer to the respective performance. The second row
shows the ratings with regard to dynamics averaged
over the eight participants. For example, the first per-
formance was rated with a score of µ = 6.63 on av-
erage. The third row shows the standard deviation,
which is σ = 1.19 for the first performance. The fol-
lowing rows of Table 1 are to be read in the same fash-
ion. For example, the participants rated the sixth per-
formance on average with µ = 6.88 (σ = 1.36) with
respect to articulation, whereas the overall impression
of this performance amounts to µ = 6.38 (σ = 1.51).
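For reference, µ and σ here presumably denote the standard sample mean and standard deviation of the ratings over the N = 8 participants; whether the divisor N or N − 1 was used for σ is not stated, so the following definitions are an assumption:
\[
\mu = \frac{1}{N}\sum_{i=1}^{N} x_i,
\qquad
\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \mu\right)^{2}},
\qquad N = 8,
\]
where x_i is the rating given by the i-th participant for the respective aspect and performance.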
As we can see, the eighth performance was ranked highest with respect to dynamics (µ = 6.75), whereas the second one was ranked highest with respect to articulation (µ = 7.00). The overall ratings for Section A are relatively close together, which may indicate that the section was too short to allow a well-founded evaluation or that it was played similarly by all students.