2.3 Hybrid Approaches
There are also hybrid solutions for generating the
user interface: custom-designed user interfaces can
be associated with services for some types of
devices, but if a suitable user interface for a
particular device does not exist, the device can
dynamically generate one from the service
description. Hodes and Katz (1999), for example,
proposed an XML-based Interface
Specification Language (ISL) that allows the
specification of methods that can be invoked on the
service and also of user interfaces for that service,
available to be downloaded and executed in the
device. ISL includes a <ui> tag to specify a user
interface for the service, which can be the name of a
component that the device somehow knows about or
a network address where the component can be
found. If no user interface is specified for a service,
or if no suitable one for that particular device is
found, the device can use the ISL to dynamically
create a user interface that allows users to interact
with the exposed methods of the service.
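As a rough illustration of this idea, an ISL-style description might look like the sketch below. Apart from the <ui> element, which the text names, the element and attribute names here are hypothetical and do not reflect the actual ISL schema:

```xml
<!-- Hypothetical ISL-style service description. Only <ui> is named
     in the text; the other elements are illustrative placeholders. -->
<service name="LightControl">
  <!-- Methods that can be invoked on the service -->
  <method name="setBrightness">
    <param name="level" type="int"/>
  </method>
  <!-- Optional custom UI: either a component name the device knows
       about, or a network address where it can be downloaded -->
  <ui device="pda" href="http://example.org/ui/light-pda"/>
</service>
```

If the <ui> entry is absent or unsuitable for the device, the method descriptions alone carry enough information for a generic interface to be generated.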
In iCrafter (Ponnekanti et al., 2001), services
register with an Interface Manager and send it a service
description in an XML-based language called
Service Description Language (SDL) – which lists
the operations supported by the service, in a similar
way to ISL. Clients obtain a list of available services
from the Interface Manager and can ask for the user
interface of a specific service (or a combination of
services). When asked for a user interface, the
Interface Manager will search for a suitable interface
generator: it first searches for a generator for that
specific service, then for a generator for that service
interface, and finally for the service-independent
generator. This allows the creation of custom user
interfaces for a service, if the developer chooses to,
but guarantees that a suitable user interface can
always be presented to the user. The interface
generator uses a template to generate code in a user
interface language supported by the controller
device (iCrafter supports HTML, VoiceXML,
SUIML and MoDAL), so controller devices are
assumed to be capable of running a user interface
interpreter that can then render the received user
interface code.
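The generator lookup order described above can be pictured as a simple fallback search, most specific match first. The following sketch uses hypothetical names, not iCrafter's actual API:

```python
# Sketch of an iCrafter-style interface-generator lookup (names are
# illustrative, not the real iCrafter API). The manager tries the most
# specific generator first and falls back to a service-independent one.
def find_generator(generators, service_name, interface_name):
    """Return the best-matching generator, most specific first."""
    for key in (service_name,      # generator for this exact service
                interface_name,    # generator for the service interface
                "*"):              # service-independent fallback
        if key in generators:
            return generators[key]
    raise LookupError("no generator registered")

generators = {
    "interface:Light": lambda: "<html>light-interface UI</html>",
    "*": lambda: "<html>generic UI</html>",
}

# No generator exists for this specific service, so the one registered
# for its service interface is chosen instead.
ui = find_generator(generators, "service:DeskLamp", "interface:Light")()
```

The `"*"` fallback is what guarantees that some user interface can always be produced, even when no custom generator was written.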
The toolkit presented in this paper draws
inspiration from the dynamic user interface generation
approach and provides web-based user interface
generation for public display applications. Unlike
the approaches presented in this section, our
approach does not require programmers to use an
interface description language to explicitly define
the user interface of the application. Instead, our
toolkit continually gathers information about which
widgets the application has created in order to be
able to replicate them in the dynamically generated
interface.
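The widget-tracking idea can be illustrated with a small sketch. All names here are hypothetical and do not reflect the PuReWidgets API: the application creates widgets through the toolkit, which records them so that an equivalent web interface can be generated on demand:

```python
# Hypothetical sketch of widget tracking for dynamic UI generation
# (not the actual PuReWidgets API).
class Toolkit:
    def __init__(self):
        self.widgets = []            # every widget the app has created

    def create_widget(self, kind, widget_id, label):
        widget = {"kind": kind, "id": widget_id, "label": label}
        self.widgets.append(widget)  # remember it for later replication
        return widget

    def generate_web_ui(self):
        """Replicate the recorded widgets as a simple HTML form."""
        rows = [f'<button name="{w["id"]}">{w["label"]}</button>'
                for w in self.widgets if w["kind"] == "button"]
        return "<form>" + "".join(rows) + "</form>"

tk = Toolkit()
tk.create_widget("button", "vote-yes", "Yes")
tk.create_widget("button", "vote-no", "No")
html = tk.generate_web_ui()
```

Because the toolkit observes widget creation directly, the programmer never writes a separate interface description: the generated web GUI mirrors whatever the application actually built.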
3 OVERVIEW OF PUREWIDGETS
PuReWidgets is an interaction toolkit for web-based
public display applications. In this section we briefly
outline its main features and architecture; a more
in-depth description can be found in Cardoso and
José (2012). PuReWidgets provides the following
main features:
Multiple, extensible widgets. The toolkit is
structured around the concept of a widget. It
incorporates various types of interaction widgets that
support the fundamental interactions with public
displays. Existing widgets can be customised and
composed into new widgets, and completely new
widgets can be created by application programmers.
Dynamically generated graphical interfaces. The
toolkit automatically generates graphical user
interfaces for mobile devices (web GUI). It also
generates QR codes for user interaction through
camera-equipped mobile devices.
Independence from specific input mechanisms
and modalities. The toolkit supports several
interaction mechanisms such as SMS, Bluetooth
naming, OBject EXchange (OBEX), email, and touch
displays, in addition to the already mentioned
desktop, mobile, and QR code interfaces. These
input mechanisms are abstracted into high-level
interaction events through the available widgets, so
that programmers do not have to deal with the
specificities of the various concrete mechanisms.
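This abstraction can be pictured as a thin mapping from raw channel input to a uniform widget event. The channel formats and field names below are invented for illustration, not the toolkit's actual event model:

```python
# Hypothetical sketch: normalising concrete input channels (SMS,
# touch, ...) into one high-level widget event shape, so application
# code never sees the raw channel data.
def to_widget_event(channel, raw):
    """Map raw channel input onto a uniform event dictionary."""
    if channel == "sms":
        # e.g. raw = "VOTE YES" sent to the display's phone number
        widget_id, text = raw.split(" ", 1)
    elif channel == "touch":
        # e.g. raw = {"target": "vote", "value": "YES"}
        widget_id, text = raw["target"], raw["value"]
    else:
        raise ValueError(f"unsupported channel: {channel}")
    return {"widget": widget_id.lower(), "input": text}

# Two different mechanisms produce the same high-level event:
ev1 = to_widget_event("sms", "VOTE YES")
ev2 = to_widget_event("touch", {"target": "vote", "value": "YES"})
```

The point of the sketch is that the application only ever handles the normalised event, regardless of which mechanism the user chose.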
Asynchronous interaction. The toolkit supports
asynchronous interaction, allowing applications to
receive input events that were generated when the
application was not executing on the public display.
Generally, this allows users to send input to any
application configured in a display, including the
ones not currently executing at the display. More
specifically, this can be used, for example, to allow
off-line customisation of an application so that
relevant content is shown to a particular user or
group of users when the application is displayed.
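Asynchronous delivery of this kind can be sketched as a per-application event queue that buffers input while the application is not on screen. This is illustrative only, not the toolkit's implementation:

```python
# Illustrative sketch of asynchronous input buffering (hypothetical;
# not the PuReWidgets implementation). Events addressed to an
# application are queued even while it is not executing on the display
# and are delivered the next time it runs.
from collections import defaultdict, deque

class EventBroker:
    def __init__(self):
        self.queues = defaultdict(deque)   # app id -> pending events

    def post(self, app_id, event):
        """Accept input for any configured app, running or not."""
        self.queues[app_id].append(event)

    def drain(self, app_id):
        """Deliver all buffered events when the app starts executing."""
        pending = list(self.queues[app_id])
        self.queues[app_id].clear()
        return pending

broker = EventBroker()
# Users interact while "quiz-app" is not currently shown:
broker.post("quiz-app", {"widget": "answer-a", "user": "u1"})
broker.post("quiz-app", {"widget": "answer-b", "user": "u2"})
# When the app next runs on the display, it receives both events:
events = broker.drain("quiz-app")
```

Buffering per application is what allows the off-line customisation scenario above: input accumulates against an application that is configured on the display but not currently executing.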
Concurrent, multi-user interaction. The toolkit
supports concurrent interactions from multiple users,
and provides applications with user identification
information that allows them to differentiate user