Let Q and I be the query image and the matching dataset painting. Given all the matches (q, i) between SURF feature descriptors, the goal is to estimate the homography transformation $H_{Q,I}$ such that $K^{(I)}_i = H_{Q,I} K^{(Q)}_q$, where $K^{(I)}_i$ and $K^{(Q)}_q$ are the detected SURF interest points of the matching features $F^{(Q)}_q$ and $F^{(I)}_i$. The approach proposed in (Brown and Lowe, 2006) is exploited to achieve this goal.
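The cited approach combines robust matching with homography fitting; its core estimation step can be illustrated with the standard direct linear transform (DLT). The following is a minimal numpy sketch, not the authors' implementation — the function name and the absence of RANSAC outlier rejection are simplifications for illustration:

```python
import numpy as np

def estimate_homography(pts_q, pts_i):
    """Estimate H such that pts_i ~ H * pts_q in homogeneous coordinates,
    using the direct linear transform (DLT).
    pts_q, pts_i: (N, 2) arrays of matched keypoints, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(pts_q, pts_i):
        # Each correspondence contributes two linear constraints on h.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale ambiguity
```

In practice SURF matches contain outliers, so this least-squares step would be wrapped in a RANSAC loop (e.g., OpenCV's `cv2.findHomography` with the `cv2.RANSAC` flag) rather than applied to all matches at once.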
Given a point in the original coordinate frame of the dataset image I, the inverse transformation matrices $H^{-1}_{Q,I}$ and $T^{-1}$ can be used to display it onto the image region R.
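This back-projection step can be sketched as follows. The exact definition of T is given earlier in the paper; here it is assumed, for illustration only, to be a 3x3 matrix applied after $H_{Q,I}$, so the inverses are composed in the reverse order. The function name is hypothetical:

```python
import numpy as np

def project_to_region(point_i, H, T):
    """Map a 2D point from the dataset-image frame back to the
    query/display frame by applying T^{-1} and then H^{-1}.
    H, T: 3x3 transformation matrices; point_i: (x, y).
    Assumes the forward mapping was T * H (composition order is an
    assumption for this sketch)."""
    p = np.array([point_i[0], point_i[1], 1.0])  # homogeneous coordinates
    q = np.linalg.inv(H) @ np.linalg.inv(T) @ p
    return q[:2] / q[2]  # back to inhomogeneous coordinates
```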
5 HUMAN DEVICE INTERFACE
Given the computed inverse homography transforma-
tions, the HDI module is used to display the infor-
mation related to a painting character through AR.
Standard Human-Computer Interaction (HCI) methods have been used to find the correct way to display the information so that users can easily interact with the user interface without cognitive effort. Three different user interfaces have been designed and evaluated with respect to standard usability rules.
The three proposed user interfaces have been designed as follows: i) painting characters' edges are highlighted with the same color and their names are shown next to them; the end-user can access character information by selecting the displayed label. ii) painting characters' edges are highlighted with the same color as before, but name labels are replaced by blinking white circles; the end-user has to select a circle to access the character information. iii) characters' edges are displayed with different colors, and characters' silhouettes are overlaid with semi-transparent, colored, blinking silhouettes; the end-user has to select the semi-transparent colored silhouette to access the character information.
6 EXPERIMENTAL RESULTS
To evaluate the proposed designs, two types of tests have been performed: i) inspection tests and ii) end-user tests. Inspection tests have been performed by usability experts without the direct involvement of the end-users.
Two types of inspection tests have been exploited: i) heuristic tests, i.e., analytical evaluation techniques that provide opinions, and ii) cognitive walkthrough tests, where the HCI experts examine the elementary actions that the end-user needs to take to achieve a given goal.

Figure 2: Testers' profiles. 30 testers have been selected to perform the required task.
The proposed system has been evaluated with a total of 30 users (Figure 2) without loss of generality (Nielsen and Landauer, 1993). During the briefing, participants were informed about the purpose of the test, the task, and its duration. Users were also asked to fill in a screening questionnaire to collect information about them. The "think-aloud" technique was used for the test sessions, each of which lasted about fifteen minutes. After each test, a debriefing was conducted to investigate unusual or interesting events that occurred.
The first user interface has been designed as shown in Figure 3(a). Six participants out of ten completed the given task, with an average execution time of 8'33". As shown in Figure 4, 25% of the users who failed to complete the task selected areas other than the character labels; 25% selected the menu button; and the remaining 50% did not complete the task at all. After debriefing, 90% of the participants were satisfied with the application, but 40% of them stated that the user interface was not clear.
The second designed user interface is shown in Figure 3(b). Only one tester out of ten failed the test, by selecting the menu button. 90% of the testers stated that the proposed user interface was clear and that it was easy to reach the information related to a character. A single tester suggested displaying the white circles with different colors. As shown in Figure 5, the second user interface achieves the best performance in terms of both success rate and average execution time. The average execution time required to complete the task was about 4.1".
The third designed user interface (Figure 3(c)) achieved the worst results. Only one tester successfully completed the task. According to the debriefing questionnaires, 70% of the participants stated that the interface was not clear and 80% of them had difficulties in recognizing the silhouette as a selectable element. Most of the testers agreed that the character recognition task was difficult due to the over-
VISAPP 2013 - International Conference on Computer Vision Theory and Applications