lines. Figure 8 shows two users working collaboratively on such a seismic line, together with the corresponding depth image.
Figure 8: Left: Two users working on the same seismic line. Right: Corresponding depth image.
Of the functionality discussed above, we implemented input coordination and individual tool selection (see Figure 7). Based on our earlier experience with the test applications, we expected that technologically unaware users would not even notice that they are being tracked in order to ensure fluid multi-user multi-touch interaction.
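In essence, the input coordination routes every touch to the tool currently selected by the user the touch was attributed to. The following sketch illustrates this idea, assuming that the tracking pipeline already labels each touch with a user ID; the class and method names (ToolCoordinator, handle_touch) are hypothetical and do not reflect our actual implementation.

# Minimal sketch of per-user tool selection; assumes each touch already
# carries the user ID assigned by the overhead depth tracking (hypothetical names).
from dataclasses import dataclass

@dataclass
class Touch:
    x: float          # normalized touch position on the display
    y: float
    user_id: int      # ID assigned by the tracking pipeline

class ToolCoordinator:
    """Keeps an individual active tool per tracked user."""

    def __init__(self, default_tool="pan"):
        self.default_tool = default_tool
        self.active_tools = {}            # user_id -> tool name

    def select_tool(self, user_id, tool):
        # Called when a user picks a tool from their personal palette.
        self.active_tools[user_id] = tool

    def handle_touch(self, touch: Touch):
        # Input coordination: the touch is handled with the tool of the
        # user it was attributed to, so concurrent users do not interfere.
        tool = self.active_tools.get(touch.user_id, self.default_tool)
        print(f"user {touch.user_id}: applying '{tool}' at ({touch.x:.2f}, {touch.y:.2f})")

coordinator = ToolCoordinator()
coordinator.select_tool(1, "horizon_picking")
coordinator.select_tool(2, "zoom")
coordinator.handle_touch(Touch(0.31, 0.62, user_id=1))
coordinator.handle_touch(Touch(0.70, 0.40, user_id=2))

In this sketch, two users select different tools and their subsequent touches are dispatched independently, which is the behavior the implicit input coordination provides.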
Geology experts used our context-aware multi-touch system and gave very positive feedback. The display size, the impressive image quality of the ultra-high-resolution screen, and the accuracy of the multi-touch input were highly appreciated. Expecting to become frequent users of the system, our visitors were also pleased with the adjustable assembly, which allows the height and inclination of the display to be changed easily. As expected, most of these experts did not perceive the implicit input coordination as a distinct feature; the system simply worked as expected.
5 CONCLUSIONS
We analyzed state-of-the-art methods for achieving user awareness on multi-touch tabletop displays and derived an improved method from our experiments. Based on the sensor data of a depth camera, we achieved robust context tracking. We described an implementation of this method, including automatic re-calibration, sensor-fused segmentation, separation of users, robust hand identification based on geodesic distances, and a detection method for involuntary touches. Based on the resulting user awareness, we suggested interaction techniques for fluid co-located collaboration. While the general idea of context tracking with an overhead camera has been proposed before, we contribute a detailed description of an up-to-date method that is robust and easy to implement. Finally, we described a high-fidelity system prototype and a collaborative application for the exploration and interpretation of seismic data.
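To make the role of the geodesic distances concrete, the following sketch shows how a hand candidate can be located as the silhouette pixel with the largest on-body path length from the body center of a segmented user. The 4-connected breadth-first search and the function names are illustrative assumptions and not the exact procedure of our implementation.

# Sketch of geodesic-distance-based hand identification on a segmented
# depth image, assuming one binary mask per user and a known body-center
# pixel (e.g., the segment centroid). Illustrative only.
from collections import deque
import numpy as np

def geodesic_distances(mask, seed):
    """Breadth-first search over the user's silhouette: the distance of a
    pixel is the length of the shortest path to the seed that stays
    inside the mask (4-connectivity, unit step cost)."""
    dist = np.full(mask.shape, np.inf)
    dist[seed] = 0.0
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and np.isinf(dist[ny, nx])):
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return dist

def hand_candidate(mask, body_center):
    """The pixel farthest from the body center along the silhouette is a
    hand candidate, even when the arm is bent so that the Euclidean
    distance to the body center would be misleading."""
    dist = geodesic_distances(mask, body_center)
    dist[np.isinf(dist)] = -1.0   # ignore background and unreachable pixels
    return np.unravel_index(np.argmax(dist), mask.shape)

# Toy example: a 1-pixel-wide "arm" extending from the body center.
mask = np.zeros((5, 8), dtype=bool)
mask[2, :] = True                       # horizontal strip of on-body pixels
print(hand_candidate(mask, (2, 0)))     # far end of the strip: row 2, column 7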
We look forward to gaining further insights into the usability of the system in long-term studies with expert users.