Towards Interactive Multisensory Data Representations
Susanne Tak and Lex Toet
TNO, Soesterberg, the Netherlands
Keywords: Visualization, Multisensory, Interactivity, Haptification, Sensification.
Abstract: Despite the availability of a broad range of newly developed multisensory displays and interaction
techniques, multisensory interactive data representations are still not widely used. We argue that, for
complex information analysis, multisensory data representations and multimodal interactivity are essential.
By leveraging the benefits of the individual sensory modalities, multisensory representations and interaction
techniques can make the representation and handling of complex data more intuitive and transparent. This
can make complex data analysis accessible to a wider audience, including non-experts. However, there
is currently a lack of agreed guidelines for their integrated design, as well as little empirical research in this
area. We argue that there is an urgent need for further systematic research into human multisensory
information processing to provide rules that enable the design and construction of representations and
interfaces that achieve optimal synergistic cooperation across sensory modalities.
1 INTRODUCTION
Visualizations are powerful tools that can provide
users with rapid insight into complex data by
mapping data attributes to visual properties such as
position, size, shape, texture, color and animation.
Appropriate visualizations enable users to discern
and interpret data patterns that may otherwise be
hard to distinguish or would even remain unnoticed.
However, visualizations are inherently limited by the
spatiotemporal bandwidth of the visual system, i.e.,
by the number of visual characteristics that can be
represented simultaneously without obscuring or
cluttering the representation.
In this paper, we argue that the representations of
complex data need to be complemented with
multisensory information and/or interactivity.
Though these two solutions are not novel, there is a
lack of guidelines and empirical evidence on how such
representations should be designed and constructed
(Sarter, 2006).
The use of multiple sensory modalities in the
representation of complex data can significantly
increase the number of data characteristics that can
be represented and analyzed simultaneously. Over
the last two decades there has been a growing
interest in multisensory data representation or data
sensification. A wide range of auditory, tactile,
olfactory and even gustatory and vestibular display
devices have been developed (Basdogan and
Loftin, 2008; Kortum, 2008), which can be used to
present information in situations where the visual
capability is either overloaded or impractical (Loftin,
2003). Sensification can be used to effectively
convey both qualitative and quantitative
information. The results of rigorous sensory
perception studies can be used to develop guidelines
that may serve to design multisensory
representations that optimize information transfer
while avoiding issues with sensory bias and sensory
conflict (Nesbitt, 2005). However, despite their
many potential benefits, multisensory interfaces and
displays are still not widely used for the
representation and exploration of complex data.
2 MULTISENSORY
REPRESENTATION
In this section, we examine the limits of the visual
channel and the potential of non-visual (auditory,
haptic, olfactory, gustatory, vestibular) display
techniques. We argue that multisensory display
techniques can be used to increase the bandwidth of
information transfer.
Multisensory data representations or
sensifications can be achieved by mapping data
parameters not only to visual but also to tactile
(roughness, viscosity, temperature, wetness, air flow
pressure), auditory (pitch, tempo, rhythm, loudness,
timbre), olfactory (valence, intensity), gustatory and
vestibular parameters (Loftin, 2003; for examples,
see Harding et al., 2002; Harding and Souleyrette,
2010; Newcomb and Harding, 2006; Ogi and
Hirose, 1997). Multisensory displays may
enable the operator to a) acquire a wider range of
details and subtle cues from the display (bandwidth
expansion); b) fill in missing information from one
sensory channel with cues from another sensory
channel (data completion); and c) integrate all these
informative cues from the different senses in an
active and creative manner into a unified coherent
percept (Gestalt forming).
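To make such a mapping concrete, the sketch below assigns each data attribute to one sensory channel and rescales it into that channel's parameter range. The attribute names, channel assignments, and parameter ranges are illustrative assumptions, not prescriptions from the cited literature.

```python
# Illustrative sensification mapping: each data attribute is assigned to one
# sensory channel and rescaled to that channel's usable parameter range.
SENSIFICATION_MAP = {
    # attribute       (modality,   parameter,            output range)
    "temperature":   ("visual",   "hue_degrees",        (240.0, 0.0)),    # blue -> red
    "wind_speed":    ("auditory", "pitch_hz",           (220.0, 880.0)),
    "uncertainty":   ("haptic",   "stiffness_n_per_m",  (800.0, 50.0)),   # low uncertainty -> stiff
}

def encode(attribute, value, value_range):
    """Rescale one data value into the parameter range of its assigned channel."""
    modality, parameter, (out_lo, out_hi) = SENSIFICATION_MAP[attribute]
    lo, hi = value_range
    norm = (value - lo) / (hi - lo)                    # normalize to [0, 1]
    return modality, parameter, out_lo + norm * (out_hi - out_lo)

# Example: a 30 m/s wind value is rendered as a 660 Hz tone on the auditory channel.
print(encode("wind_speed", 30.0, value_range=(0.0, 45.0)))
```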
Temporal integration of multisensory signals can
create salient emergent features that automatically
draw attention in cluttered dynamic displays (Van
der Burg et al., 2008, 2009). More specifically,
tactile (Van der Burg et al., 2009) and auditory (Van
der Burg et al., 2008) cues can boost the saliency of
visual features, even when the cues themselves
provide no information about the location or nature
of the visual feature. Multisensory representations
can therefore yield a richer and more coherent
experience, thus enabling users to perceive spatio-
temporal patterns in complex dynamic
multidimensional data that may otherwise be hard to
perceive or even go unnoticed.
2.1 The Limits of Visualization
Visual representations map data parameters to visual
features like color, saturation, intensity, density, and
animation frequency. However, the human visual
system is inherently band limited and suffers from
occlusion, crowding and clutter (Van den Berg et al.,
2009), inattentional blindness and change blindness
(Mancero et al., 2007). Enriching visualizations with
additional sensory modalities may considerably
expand the human information processing
capability.
2.2 Sonification
Sonification is “the use of non-speech audio to
convey information” (Kramer, 1993). Sonification
represents data as sound by mapping data parameters
to audio parameters such as pitch, rhythm,
loudness, and timbre. Auditory representations can be
used to perform trend analysis, point estimation,
pattern detection, and point comparison (Walker and
Nees, 2005). Encoding uncertainty as an extra layer
of sound enables the interactive exploration of
visualizations with inherent uncertainty (Brown and
Bearman, 2012). Auditory displays can highlight
subtle changes in values, illuminate gradual changes,
present several data-streams concurrently, and
emphasize anomalies and outliers, thereby
complementing visualization methods (Ferguson et
al., 2012). Sonification has successfully been
deployed to represent uncertainty in climate change
predictions (Bearman, 2011), positional uncertainty
(Bearman and Lovett, 2010) and cell normality
(Edwards et al., 2010).
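As a concrete illustration of such parameter-mapping sonification, the sketch below (plain Python with NumPy and the standard wave module) renders a data series as a sequence of tones, mapping value to pitch and certainty to loudness. The frequency range, tone length, and loudness floor are illustrative assumptions.

```python
# Illustrative sonification: each (value, certainty) pair becomes a short sine
# tone; value is mapped to pitch and certainty to loudness.
import wave
import numpy as np

def sonify(values, certainties, path="sonification.wav",
           rate=44100, tone_s=0.25, f_min=220.0, f_max=880.0):
    v = np.asarray(values, dtype=float)
    c = np.clip(np.asarray(certainties, dtype=float), 0.0, 1.0)
    v_norm = (v - v.min()) / (v.max() - v.min() + 1e-12)   # normalize values to [0, 1]
    freqs = f_min * (f_max / f_min) ** v_norm               # log-spaced pitch mapping
    t = np.arange(int(rate * tone_s)) / rate
    tones = [amp * np.sin(2 * np.pi * f * t)
             for f, amp in zip(freqs, 0.2 + 0.8 * c)]       # certainty -> amplitude
    pcm = (np.concatenate(tones) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)      # 16-bit mono PCM
        wav.setframerate(rate)
        wav.writeframes(pcm.tobytes())

# Example: a rising trend with decreasing certainty becomes rising, fading tones.
sonify(values=[1, 2, 3, 5, 8, 13], certainties=[1.0, 0.9, 0.8, 0.6, 0.4, 0.2])
```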
2.3 Haptification
A wide range of haptic data visualization or
haptification techniques and devices have been
developed to let users feel and interact with data (for
recent reviews see Kaber and Zhang, 2011; Paneels
and Roberts, 2010). Haptification of volumetric data
has successfully been applied in virtual reality, for
instance in molecular modeling to let users feel and
interact with the bonds between molecules, and in
virtual surgery to interact in real time with virtual
organs and feel their deformations. Haptic
models are rendered in terms of forces and
vibrations. Variables that can be used to convey
information are for instance actuator position,
vibration frequency, and surface texture.
Artificial force fields appear intuitively suitable
to represent information uncertainty. For instance,
when a person using a tactile device touches a
surface or volume element the system may signal its
positional certainty through force feedback: a
stiffer feel or higher resistance signals high
certainty, whereas a softer feel or lower resistance
signals low certainty (Schmidt et al., 2004).
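A minimal sketch of this certainty-to-stiffness idea, assuming a generic penalty-based spring model; the stiffness range and the linear interpolation are illustrative choices, not taken from Schmidt et al. (2004).

```python
def uncertainty_feedback_force(penetration_depth_m, certainty,
                               k_min=50.0, k_max=800.0):
    """Resistive force (N) for a probe pressing into a surface element.

    certainty is the positional certainty of the element, in [0, 1]:
    high certainty -> stiff surface, low certainty -> soft surface.
    """
    certainty = max(0.0, min(1.0, certainty))
    stiffness = k_min + certainty * (k_max - k_min)   # N/m, linear interpolation
    return stiffness * penetration_depth_m            # Hooke's law: F = k * x

# Example: the same 2 mm penetration feels far stiffer on a certain element.
print(uncertainty_feedback_force(0.002, certainty=1.0))   # 1.6 N
print(uncertainty_feedback_force(0.002, certainty=0.1))   # 0.25 N
```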
2.4 Other Senses
The information transmission capability of the
olfactory sense is still largely unknown (Washburn
and Jones, 2004). This is also true for gustatory and
vestibular parameters (Basdogan and Loftin, 2008).
2.5 Using Multisensory Information for
Data Representation
For a simple 2D threat avoidance task, users showed
comparable performance when uncertainty was
represented visually (color), auditorily (tone pitch),
or vibro-tactilely (vibration amplitude)
(Basapur et al., 2003). Previous research
has also shown that users like the ability to receive
TowardsInteractiveMultisensoryDataRepresentations
559
information about ‘invisible’ data via friction or
sound while exploring surfaces (Harding et al.,
2002). Finally, multisensory display systems have
successfully been used to represent and investigate
GIS data (Faeth et al., 2008; Harding and
Souleyrette, 2010) and medical data (Diepenbrock et
al., 2011). However, there is still little empirical
evidence for the added value of multisensory
representations. Also, it is not yet clear how
information from different senses can best
complement one another, or what happens when two
senses give conflicting information.
The use of multisensory information appears
promising from a theoretical point of view, as it
may alleviate visualization problems such as occlusion,
crowding, and clutter. However, more research is
needed to examine how different data aspects can
best be represented sonically, haptically, or even
olfactorily.
3 TOWARD MULTISENSORY
INTERACTION
Tools for visual analysis can be enriched with
interactive elements. Heer and Shneiderman (2012)
introduced a taxonomy of interactive dynamics for
visual analysis. Here, we examine how interactive
multisensory representation can be used to help
users understand and engage with complex data.
Speech and gestures can be used for data and
view specification as well as view manipulation.
Speech interaction is suited for descriptive
techniques, while gestural interaction is ideal for
direct manipulation of objects (Oviatt, 1999).
Speech allows interaction with objects regardless of
their degree of visual exposure (occlusion). It
appears that users prefer combined speech and
gestural interaction over either modality alone for
graphics manipulation (Hauptmann and
McAvinney, 1993). While some tasks are inherently
graphical, others are verbal, and yet others require
both vocal and gestural input. Allowing both
interaction types broadens the range of tasks that can
be done intuitively and simultaneously. However, it
has been observed that different contexts pose
different representation requirements, particularly
for data sonifications (Ferguson et al., 2012). Thus,
more research is needed to derive consistent rules
for integrating multiple sensory modalities in a
common interaction framework.
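As one illustration of combining the two modalities, the sketch below fuses a spoken command with a concurrent pointing gesture by temporal proximity, in the spirit of "put that there" interaction. The event types, field names, and fusion window are illustrative assumptions, not a framework from the cited work.

```python
# Illustrative speech-gesture fusion: speech names the action, the gesture
# supplies the spatial referent; the two are bound by temporal proximity.
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    t: float          # timestamp (s)
    command: str      # e.g. "select", "hide", "annotate"

@dataclass
class GestureEvent:
    t: float          # timestamp (s)
    target_id: str    # object under the pointing gesture

def fuse(speech, gestures, window_s=1.0):
    """Bind a spoken command to the gesture closest in time within a fusion window."""
    nearby = [g for g in gestures if abs(g.t - speech.t) <= window_s]
    if not nearby:
        return None   # no referent: fall back to a speech-only interpretation
    referent = min(nearby, key=lambda g: abs(g.t - speech.t))
    return speech.command, referent.target_id

# Example: "hide that" spoken at t=3.2 s while pointing at cluster_7 at t=3.0 s.
print(fuse(SpeechEvent(3.2, "hide"), [GestureEvent(3.0, "cluster_7")]))
```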
Voice input appears a natural mode for process
and provenance tasks, allowing users to vocally annotate
their observations and replay sequences to other
users. Also, previous work has generated promising
results regarding the use of sonification for
representing uncertainty information.
Olfactory cues can be deployed to provide subtle
feedback while the user is exploring the data. In
contrast, haptic feedback is possibly more suitable
for notification and alerting purposes.
4 CONCLUSIONS
Over the last decades there have been significant
developments in the design and development of user
interfaces deploying individual sensory channels.
The integration of computer vision based techniques
like gaze, gesture and facial expression recognition,
with speech recognition and tactile input devices
will enable the construction of multisensory
interfaces that allow natural and dynamic interaction
with complex data. The availability of intuitive
multisensory interfaces in combination with
appropriate data sensifications will democratize the
analysis of complex data by making it accessible to a
wide audience.
Although preliminary studies on interaction (e.g.
gestural, voice, tactile) and data representation (e.g.
3D, sonification, haptification) techniques for
different individual sensory channels have shown
promising results, these methods have not been
widely integrated and adopted for the representation
and analysis of complex multidimensional data. This
is mainly due to a lack of consistent rules and
guidelines for the integrated design of multisensory
displays (data sensifications) and their user
interfaces. We argue that rigorous user studies are
required to derive guidelines that ensure the
consistency between different information channels.
REFERENCES
Basapur, S., Bisantz, A. M., Kesavadas, T., 2003. The
effect of display modality on decision-making with
uncertainty. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 47(3), 558-561.
Basdogan, C., Loftin, R. B., 2008. Multimodal display
systems: haptic, olfactory, gustatory, and vestibular. In
D. Schmorrow, J. V. Cohn, & D. Nicholson (Eds.),
The PSI handbook of virtual environments for training
and education: Developments for the Military and
Beyond (pp. 116-134). Westport, CT, USA: Praeger
Security International.
Bearman, N., 2011. Using sound to represent uncertainty
in future climate projections for the United Kingdom.
IVAPP2013-InternationalConferenceonInformationVisualizationTheoryandApplications
560
In Proceedings of ICAD 2011.
Bearman, N., Lovett, A., 2010. Using sound to represent
positional accuracy of address locations. The
Cartographic Journal, 47(4), 308-314.
Brown, E., Bearman, N., 2012. Listening to uncertainty:
Information that sings. Significance, 9(5), 14-17.
Diepenbrock, S., Praßni, J.-S., Lindemann, F., Bothe, H.-
W., Ropinski, T., 2011. Interactive visualization
techniques for neurosurgery planning. In Proceedings
of Eurographics 2011, 13-16.
Edwards, A., Hunt, A., Hines, G., Jackson, V., Podvoiskis,
A., Roseblade, R., Stammers, J., 2010. Sonification
strategies for examination of biological cells. In
Proceedings of ICAD 2010.
Faeth, A., Oren, M., Harding, C., 2008. Combining 3-D
geovisualization with force feedback driven user
interaction. In Proceedings GIS '08. ACM.
Ferguson, S., Beilharz, K., Calò, C., 2012. Navigation of
interactive sonifications and visualisations of time-
series data using multi-touch computing. Journal on
Multimodal User Interfaces, 5(3), 97-109.
Harding, C., Kakadiaris, I. A., Casey, J.F., & Loftin, R.B.,
2002. A multi-sensory system for the investigation of
geoscientific data. Computers & Graphics, 26(2), 259-
269.
Harding, C., Souleyrette, R. R., 2010. Investigating the use
of 3D graphics, haptics (touch), and sound for
highway location planning. Computer-Aided Civil
and Infrastructure Engineering, 25(1), 20-38.
Hauptmann, A. G., McAvinney, P., 1993. Gestures with
speech for graphic manipulation. International
Journal of Man-Machine Studies, 38(2), 231-249.
Heer, J., Shneiderman, B., 2012. Interactive dynamics for
visual analysis. Communications of the ACM, 55(4),
45-54.
Kaber, D. B., Zhang, T., 2011. Human factors in virtual
reality system design for mobility and haptic task
performance. Reviews of Human Factors and
Ergonomics, 7(1), 323-366.
Kortum, P., 2008. HCI beyond the GUI: Design for haptic,
speech, olfactory, and other nontraditional interfaces.
Burlington, MA: Morgan Kaufmann.
Kramer, G., 1993. Auditory display: sonification,
audification, and auditory interfaces. Boston, MA,
USA: Addison-Wesley.
Loftin, R. B., 2003. Multisensory perception: beyond the
visual in visualization. Computing in Science &
Engineering, 5(4), 56-58.
Mancero, G., Wong, W., Amaldi, P., 2007. Looking but
not seeing: implications for HCI. In Proceedings of the
14th European conference on Cognitive ergonomics,
167-174. ACM.
Nesbitt, K., 2005. A framework to support the designers of
haptic, visual and auditory displays. In Proceedings of
GOTHI 2005, 54-64.
Newcomb, M., Harding, C., 2006. A multi-modal interface
for road planning tasks using vision, haptics and
sound, In Proceedings of ISVC 2006, 417-426.
Springer-Verlag.
Ogi, T., Hirose, M., 1997. Usage of multisensory
information in scientific data sensualization.
Multimedia Systems, 5(2), 86-92.
Oviatt, S., 1999. Ten myths of multimodal interaction.
Communications of the ACM, 42(9), 74-81.
Paneels, S., Roberts, J.C., 2010. Review of designs for
haptic data visualization. IEEE Transactions on
Haptics, 3(2), 119-137.
Sarter, N. B., 2006. Multimodal information presentation:
Design guidance and research challenges.
International Journal of Industrial Ergonomics, 36
(5), 439-445.
Schmidt, G. S., Chen, S.-L., Bryden, A. N., Livingston,
M.A., Osborn, B.R., Rosenblum, L.J., 2004. Multi-
dimensional visual representations for underwater
environmental uncertainty. IEEE Computer Graphics
and Applications, 24(5), 56-65.
Van den Berg, R. V., Cornelissen, F. W., Roerdink,
J.B.T.M., 2009. A crowding model of visual clutter.
Journal of Vision, 9(4).
Van der Burg, E., Olivers, C. N., Bronkhorst, A. W.,
Theeuwes, J., 2008. Pip and pop: nonspatial auditory
signals improve spatial visual search. Journal of
Experimental Psychology: Human Perception and
Performance, 34(5), 1053-1065.
Van der Burg, E., Olivers, C. N., Bronkhorst, A. W.,
Theeuwes, J., 2009. Poke and pop: tactile-visual
synchrony increases visual saliency. Neuroscience
Letters, 450(1), 60-64.
Walker, B. N., Nees, M. A., 2005. An agenda for research
and development of multimodal graphs. In
Proceedings of the 11th International Conference on
Auditory Display (ICAD 2005).
Washburn, D. A., Jones, L. M., 2004. Could olfactory
displays improve data visualization? Computing in
Science & Engineering, 6(6), 80-83.
TowardsInteractiveMultisensoryDataRepresentations
561