to select the viewport by clicking on it, activating
the Views menu and picking the view they want
from that menu. Method 2 (Figure 6, middle)
requires the user to open a settings window from the
menu. The settings window has four dropdown menus, one per viewport; the user changes a view by selecting it in the corresponding dropdown menu. Method 3 (Figure 6, bottom) is the simplest, requiring the user only to click on the symbol at the top-right corner of a viewport and select the view from the menu that pops up under the mouse.
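As an illustration only, the following Python sketch shows how the three selection methods could converge on a single view-setting operation per viewport; the class, the view list and the function names are assumptions, not the actual EPS Visualizer code.

VIEWS = ["top", "front", "side", "camera 1", "camera 2"]  # assumed set of views

class Viewport:
    def __init__(self, name, view="top"):
        self.name = name
        self.view = view

    def set_view(self, view):
        # Single entry point reached by all three UI methods:
        # the Views menu (method 1), the settings-window dropdowns
        # (method 2) and the per-viewport popup menu (method 3).
        if view not in VIEWS:
            raise ValueError(f"unknown view: {view}")
        self.view = view
        self.redraw()

    def redraw(self):
        print(f"{self.name}: now showing '{self.view}'")

# e.g. method 3: the popup of viewport 2 sets that viewport's view directly
viewports = {n: Viewport(n) for n in ("vp1", "vp2", "vp3", "vp4")}
viewports["vp2"].set_view("front")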
Each viewport can also be panned by dragging
with the middle mouse button. While the mouse is over one of the orthogonal views, the mouse wheel will zoom in and out; over the two camera views, it will control the FoV. The zoom extents button, found in the lower-left corner of each viewport, re-centres the image. For the orthogonal views it also sets the zoom so that the whole model fits in the viewport; it does not change the FoV of the two camera views.
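A minimal sketch of this viewport behaviour, assuming a simple distinction between orthogonal and camera viewports (attribute names and zoom/FoV factors are illustrative, not taken from the implementation):

class ViewportInput:
    def __init__(self, kind):
        self.kind = kind            # "orthogonal" or "camera"
        self.zoom = 1.0             # used by orthogonal views
        self.fov = 60.0             # used by camera views (degrees)
        self.pan = (0.0, 0.0)

    def on_middle_drag(self, dx, dy):
        # Any viewport can be panned by dragging with the middle button.
        px, py = self.pan
        self.pan = (px + dx, py + dy)

    def on_wheel(self, steps):
        # The wheel zooms orthogonal views but controls FoV on camera views.
        if self.kind == "orthogonal":
            self.zoom *= 1.1 ** steps
        else:
            self.fov = max(1.0, min(170.0, self.fov - 2.0 * steps))

    def zoom_extents(self, model_extent=None):
        # Re-centre the image; for orthogonal views, also fit the whole
        # model; the FoV of camera views is left untouched.
        self.pan = (0.0, 0.0)
        if self.kind == "orthogonal" and model_extent:
            self.zoom = 1.0 / model_extent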
4.4 Projection Control
Apart from the main window, the most important
window is the projection control window (Figure 3,
right). This window contains input boxes, sliders and
buttons for the field of view, up angle, mapping
mode, radius and eccentricity parameters. The FoV,
up angle, radius and eccentricity parameters are each
controlled by an input box and a slider, the input box
doubling as a display for the current value. The up
angle slider is in the form of a dial. The mapping
mode is controlled either by a single button or by one button per mode. The single button displays the currently active mode, and clicking on it cycles through the available modes. The alternative method has a smaller button per mode, which remains pressed while the corresponding mode is active. Each button shows a symbol representing the mapping mode and displays the name of the mode in a tooltip.
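The two mapping-mode controls could behave as in the following sketch; the mode names are placeholders, since only the existence of several mapping modes is stated here.

MODES = ["mode A", "mode B", "mode C"]   # placeholder mapping-mode names

class MappingModeControl:
    def __init__(self):
        self.mode = MODES[0]

    # Variant 1: a single button labelled with the active mode; each click
    # cycles to the next mode.
    def on_cycle_button_click(self):
        i = MODES.index(self.mode)
        self.mode = MODES[(i + 1) % len(MODES)]
        return self.mode             # new button label

    # Variant 2: one smaller toggle button per mode; the button of the
    # active mode remains pressed.
    def on_mode_button_click(self, mode):
        if mode in MODES:
            self.mode = mode
        return {m: (m == self.mode) for m in MODES}   # pressed states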
5 USER EVALUATION
A user evaluation was carried out to determine the EPS Visualizer's usability and its perceived usefulness, satisfaction and ease of use as a tool to support the design process in architecture. It also aimed at deciding among the alternative interaction methods that were designed, gathering comments and suggestions, and identifying aspects to refine.
For a more comprehensive evaluation, the user study involved users with different backgrounds: students and experts, both from the target audience of architecture and from informatics, with experience in using and developing interactive tools, HCI and computer graphics.
5.1 Method
The evaluation followed a task-oriented approach based mainly on Observation and one-on-one semi-structured Interviews, followed by a final Questionnaire aimed at a global opinion on the usefulness of, and user experience with, the EPS Visualizer.
Each interview took place between one interviewer and one subject, in front of a computer with two screens, each displaying one of the variant visualizers (VA and VB). After an explanation of the purpose of the evaluation and a brief introduction to the concept behind the Visualizer, demographic questions were asked, followed by a short video demo (3 min) about the Visualizer, around 7 min of free exploration, and a set of 4 main tasks with the two versions of the software, each version implementing different options in regard to the alternative interfaces.
The first three tasks, and their subtasks, took the user through each feature and variant of the interface. If a particular feature had a variant, the user was prompted to try both versions and indicate a preference. The order in which the variants were tried was defined at the beginning of the interview, alternating so as not to favour one variant over the other. The fourth task was more complex and free-form, requiring users to use the interface to reach a goal: an image that they were to reproduce with the Visualizer. By this point, they would already be quite familiar with the basic features of the interface.
During the tasks, the interviewer observed and recorded whether each sub-task was completed successfully, along with any errors, hesitations, performance, and comments, as well as the resulting image of the fourth task (exported from the Visualizer). At the end of each of the main sub-tasks, users provided a 1-5 rating (very bad to very good), based on the USE questionnaire (Lund, 2001), for Usefulness, Satisfaction and Ease of use.
Finally, users filled in a questionnaire about their experience with the software, focusing on their global opinions and including the well-known System Usability Scale (SUS) questions (Brooke, 1996), chosen for their simplicity and robustness, which allow standardized usability measures that are considered reliable. The answers were mostly in the form of a 1-5 scale (‘never’ to ‘always’; or ‘very