possible (Lai and Zakaria, 2010), thus bringing
together the flexibility of pen-and-paper sketching
with computer-based modelling.
In this paper, we build on the paper-based SBI
and the immersive modelling environment described in
(Bartolo et al., 2008) and (Israel et al., 2013) respec-
tively to create a new SBI that combines 2D sketch-
ing with immersive 3D modelling. This interface dif-
fers from others described in the literature in that 2D
sketching can be performed both online, within the
immersive environment, and offline, such that 3D
models can be projected into the immersive environ-
ment from the user's pen-and-paper sketches, thus
creating a hybrid SBI that accepts both online and
offline sketching as input. We also report the results
of a user study performed using both sketching
modalities, hence observing the users' perception of
the new interface.
The rest of this paper is organised as follows: Sec-
tion 2 presents the related work; Section 3 presents
our proposed sketch-based interface; the methodol-
ogy employed for the user evaluation is presented in
Section 4, with results discussed in Section 5, while
Section 6 concludes the paper.
2 RELATED WORK
Sketch-based interfaces generally incorporate ges-
tures and sketching to allow the user to create 3D
models from drawings. Gestures, which can be cre-
ated using tools and instruments like pens, range
from simple editing commands, such as the deletion
of strokes, to more complex 3D modelling commands,
such as extrusion and lofting (Zeleznik et al., 2006;
Fonseca et al., 2002). To help the user visualise the
effect of a gesture, it is common practice for SBIs to
temporarily display the gesture trace as lines or
strokes. Gestures therefore facilitate the interpretation
of the sketch, but require that the user has a good
knowledge of the gestures and their actions. Sketch-
based interfaces thus strike a balance between sketch-
ing freedom and the use of gestures which aid the
interpretation of the sketch.
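As a concrete illustration of a gesture command, the following Python fragment sketches one simple way an SBI might separate a "delete" scribble gesture from an ordinary drawing stroke; the ink-to-bounding-box heuristic, the threshold value and the function name are illustrative assumptions rather than the mechanism used by any of the cited systems.

import numpy as np

def is_scribble(stroke, ink_ratio_threshold=2.5):
    # Hypothetical heuristic: a scribble gesture packs a long ink trail
    # into a small bounding box, so the ratio of path length to the
    # bounding-box diagonal is large. `stroke` is an (N, 2) array of
    # (x, y) sample points captured from the pen.
    stroke = np.asarray(stroke, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(stroke, axis=0), axis=1))
    diagonal = np.linalg.norm(stroke.max(axis=0) - stroke.min(axis=0))
    if diagonal == 0:
        return False
    return path_length / diagonal > ink_ratio_threshold

# A tight back-and-forth zig-zag is flagged as a scribble; a straight stroke is not.
zigzag = [(x, (x % 2) * 5) for x in range(20)]
straight = [(x, x) for x in range(20)]
print(is_scribble(zigzag), is_scribble(straight))   # True False

In a real SBI the recognised gesture would then trigger the corresponding editing or modelling command, while unrecognised strokes would be kept as part of the sketch.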
One such interface is CHATEAUX (Igarashi et al.,
1997), which allows the artist to sketch in 3D, pro-
viding thumbnails showing different ways in which
a sequence of strokes can be completed. While
such a suggestive interface can help speed up the
modelling process, it is somewhat intrusive, limit-
ing design exploration to the models suggested by
the interface. Less intrusive interfaces which also
provide more drawing flexibility are obtained through
blob-like inflation of 2D contours, as in TEDDY
(Igarashi et al., 1999) and SHAPESHOP (Schmidt
et al., 2006), among others. These allow the designer
to create blob-like models from the sketched con-
tours, which supports a natural drawing style; how-
ever, the inflation used for the 3D modelling limits
their applicability to blob-like models. To address
this, additional sketched gestures in 3D space are re-
quired to mould the model into the desired shape. Such
gestures range from simple inflation or deflation of
the blob-like model to more complex deformation
tools that are loosely modelled on the deformations
used to form clay sculptures, with DIGITAL CLAY
(Schweikardt and Gross, 2000) and FIBREMESH
(Nealen et al., 2007) providing examples of such
interfaces.
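The following Python sketch illustrates the general idea behind such inflation-based modellers by lifting a closed 2D contour into a rounded heightfield; the distance-transform construction, the grid resolution and the square-root profile are simplifying assumptions and do not reproduce the actual TEDDY or SHAPESHOP algorithms.

import numpy as np
from scipy.ndimage import distance_transform_edt
from matplotlib.path import Path

def inflate_contour(contour, grid_size=128):
    # Inflate a closed 2D contour (an (N, 2) array of points in the unit
    # square) into a blob-like heightfield: the further a cell lies from
    # the outline, the higher it is raised.
    ys, xs = np.mgrid[0:grid_size, 0:grid_size] / (grid_size - 1)
    inside = Path(contour).contains_points(
        np.column_stack([xs.ravel(), ys.ravel()])).reshape(grid_size, grid_size)
    dist = distance_transform_edt(inside)                 # distance to the contour
    height = np.sqrt(dist) / np.sqrt(dist.max() or 1.0)   # rounded, pillow-like profile
    return np.where(inside, height, 0.0)

# A sketched circle of radius 0.4 inflates into a dome-shaped blob.
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.column_stack([0.5 + 0.4 * np.cos(theta), 0.5 + 0.4 * np.sin(theta)])
dome = inflate_contour(circle)
print(dome.shape, round(float(dome.max()), 2))   # (128, 128) 1.0

Because the resulting height depends only on the distance to the silhouette, every contour yields a rounded blob, which is precisely the limitation that the deformation gestures described above are intended to overcome.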
These sketching modalities can be extended to in-
troduce fully immersive drawing (Perkunder et al.,
2010; Israel et al., 2013), whereby a rendering sys-
tem and an optical tracking system allow the user
to sketch and interact with 3D objects in a virtual en-
vironment within a five-sided CAVE. Freehand draw-
ing and modelling are carried out using three tangi-
ble interfaces, namely a stylus to draw virtual ink in
the virtual environment, a pair of pliers which allow
the user to group, reposition and release virtual ob-
jects in the CAVE, and a Bezier-tool which allows the
user to extrude a Bezier curve in 3D space, follow-
ing the movement of a two-handed tool (Israel et al.,
2009). With this system, users are not restricted to any
particular gestures or sketching language and, once
they have adapted to the absence of a physical sketch-
ing medium, are therefore allowed greater sketching
freedom than with the other interfaces mentioned ear-
lier. Moreover, it has been shown that designers are
able to learn the interaction techniques necessary to
work within the immersive environment, albeit with a
rather steep learning curve (Wiese et al., 2010).
These interfaces model the 3D geometries incre-
mentally, building the 3D shape as the user sketches
and makes use of gestures. Sketching must therefore
be carried out in an online fashion and, in the par-
ticular case of Israel et al., within the immersive en-
vironment, thus precluding the use of pen-and-paper
sketching. In contrast, Bartolo et al. describe a
sketching interface which infers the 3D geometry of
the sketch in an offline manner, allowing the user to
sketch with real ink on real paper, as well as with dig-
ital ink on graphic tablets. Using this SBI, the user’s
sketch is expected to contain two components, namely
the sketched longitudinal profile of the object, which
defines the object shape, and annotations, which aug-
ment the sketch with additional information about the