8 CONCLUSION
We have presented MAUI, a collaborative platform
that lets a worker wearing an AR headset call a remote
expert for help with operating a cyber-physical
system. MAUI combines spatial AR telepresence
through shared audio/video with shared control of a
web-based user interface displayed in the AR headset.
The expert can take full control of the UI, relieving
the worker of handling the digital interface and letting
the worker concentrate on the physical interface
instead.
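The paper describes this shared-control design only at a high level. As a rough illustration, the following TypeScript sketch shows one plausible realization, assuming a WebSocket relay with a single control token; the event types, state shape, and port are our own illustrative assumptions, not MAUI's actual code.

import { WebSocketServer, WebSocket } from "ws";

// Minimal sketch of shared UI control (not MAUI's actual protocol).
// The worker's AR client and the expert's browser both connect to a
// relay; whoever holds the control token may mutate the shared UI
// state, which is then broadcast so both views stay in sync.

type UiEvent =
  | { kind: "setValue"; widgetId: string; value: string }
  | { kind: "takeControl"; party: "worker" | "expert" };

interface SharedUiState {
  controller: "worker" | "expert";   // who may currently drive the UI
  values: Record<string, string>;    // widgetId -> current value
}

const state: SharedUiState = { controller: "worker", values: {} };
const server = new WebSocketServer({ port: 8080 });

server.on("connection", (socket) => {
  socket.send(JSON.stringify(state)); // initial state on connect

  socket.on("message", (raw) => {
    const ev = JSON.parse(raw.toString()) as UiEvent;
    if (ev.kind === "takeControl") {
      state.controller = ev.party;    // e.g. the expert takes over
    } else {
      // A real system would verify that the sender is the current
      // controller; sender identity is omitted in this sketch.
      state.values[ev.widgetId] = ev.value;
    }
    // Broadcast the updated state to every connected client.
    for (const client of server.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(state));
      }
    }
  });
});

With such a relay, "the expert takes full control" reduces to sending a single takeControl event, after which the worker's headset merely renders the state the expert drives.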
We performed a quantitative user study to compare
both interaction methods in terms of time and comfort.
Participants overwhelmingly preferred expert help
over working alone, and the results show that support
from an expert can reduce cognitive load and increase
performance. In particular, the time until the first
physical action can be performed is reduced, allowing
a quick response in critical real-world scenarios
(Huang et al., 2013; Huang and Alem, 2011).
Furthermore, two developers working with the presented
system for the first time were able to produce
meaningful results within a time frame of up to three
hours: one improved the application layout to make
better use of screen real estate, and the other improved
usability, especially on the AR head-mounted display (HMD).
A future study could also compare pure audio/video
help with remote UI help, although such a study
would involve more complicated aspects. A study
comparing different feature sets of remote help for
AR would also have to consider whether the remote
expert can observe all relevant physical activities
around the worker, whether the expert can control IoT
objects in the worker's environment, and so on.
In the future, we plan to extend our tests to real
workers in a production environment and to improve
the aesthetics of the user interface to fit modern
design standards (cf. Design Principles). Long-term
evaluations will show how effectively the system
educates workers in problem solving. We plan to
improve the widget placement system over the standard
solution (Microsoft Mixed Reality Toolkit: Tag-along)
to reliably avoid situations where the UI blocks the
worker's view of the physical objects. Moreover, we
plan a user interface management system for delivering
tailored user interfaces to workers based on a
formal task description.
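The paper does not specify this planned task-description format, so the following TypeScript sketch is purely hypothetical: the TaskDescription type and tailorUi function are invented here to illustrate how a formal task description could drive which widgets are delivered to the worker.

// Hypothetical sketch of task-driven UI delivery; every type and
// rule below is an assumption for illustration, not MAUI's design.

interface TaskStep {
  instruction: string;        // e.g. "Open the coolant valve"
  physicalObject: string;     // anchor in the worker's environment
  requiredControls: string[]; // widget ids needed for this step
}

interface TaskDescription {
  taskId: string;
  steps: TaskStep[];
}

interface WidgetSpec {
  id: string;
  label: string;
}

// Given the full widget catalogue and the current step, derive the
// minimal UI: only the controls this step actually requires.
function tailorUi(catalogue: WidgetSpec[], step: TaskStep): WidgetSpec[] {
  const wanted = new Set(step.requiredControls);
  return catalogue.filter((w) => wanted.has(w.id));
}

// Usage: deliver a reduced UI for the first step of a task.
const catalogue: WidgetSpec[] = [
  { id: "valve-toggle", label: "Coolant valve" },
  { id: "pump-rpm", label: "Pump speed" },
  { id: "log-view", label: "Event log" },
];

const task: TaskDescription = {
  taskId: "coolant-maintenance",
  steps: [
    {
      instruction: "Open the coolant valve",
      physicalObject: "valve-17",
      requiredControls: ["valve-toggle"],
    },
  ],
};

console.log(tailorUi(catalogue, task.steps[0]));
// -> [{ id: "valve-toggle", label: "Coolant valve" }]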
ACKNOWLEDGEMENTS
The authors wish to thank Denis Kalkofen. This work
was supported by FFG grant 859208.
REFERENCES
Alce, G., Roszko, M., Edlund, H., Olsson, S., Svedberg, J., and Wallergård, M. (2017). [Poster] AR as a user interface for the Internet of Things - comparing three interaction models. In ISMAR-adj., pages 81–86.

Barakonyi, I., Fahmy, T., and Schmalstieg, D. (2004). Remote collaboration using augmented reality videoconferencing. In Proc. of Graphics Interface (GI), pages 89–96. Canadian Human-Computer Comm. Society.

Bauer, M., Kortuem, G., and Segall, Z. (1999). "Where are you pointing at?" A study of remote collab. in a wearable videoconf. system. In ISWC, pages 151–158.

Baumeister, J., Ssin, S. Y., ElSayed, N. A. M., Dorrian, J., Webb, D. P., Walsh, J. A., Simon, T. M., Irlitti, A., Smith, R. T., Kohler, M., and Thomas, B. H. (2017). Cognitive cost of using augmented reality displays. TVCG, 23(11):2378–2388.

Chastine, J. W., Nagel, K., Zhu, Y., and Hudachek-Buswell, M. (2008). Studies on the effectiveness of virtual pointers in collaborative augmented reality. In 3DUI, pages 117–124.

Chen, S., Chen, M., Kunz, A., Yantaç, A. E., Bergmark, M., Sundin, A., and Fjeld, M. (2013). SEMarbeta: Mobile sketch-gesture-video remote support for car drivers. In Augmented Human International Conference (AH).

Ens, B., Hincapié-Ramos, J. D., and Irani, P. (2014). Ethereal planes: A design framework for 2D information spaces in 3D mixed reality environments. In SUI. ACM.

Feiner, S., MacIntyre, B., Haupt, M., and Solomon, E. (1993). Windows on the world: 2D windows for 3D augmented reality. In UIST, pages 145–155. ACM.

Funk, M., Bächler, A., Bächler, L., Kosch, T., Heidenreich, T., and Schmidt, A. (2017). Working with AR?: A long-term analysis of in-situ instructions at the assembly workplace. In PETRA, pages 222–229. ACM.

Funk, M., Kosch, T., and Schmidt, A. (2016). Interactive worker assistance: Comparing the effects of in-situ projection, head-mounted displays, tablet, and paper instructions. In UBICOMP, pages 934–939. ACM.

Fussell, S. R., Setlock, L. D., Yang, J., Ou, J., Mauer, E., and Kramer, A. D. I. (2004). Gestures over video streams to support remote collaboration on physical tasks. Hum.-Comput. Interact., 19(3):273–309.

Gauglitz, S., Nuernberger, B., Turk, M., and Höllerer, T. (2014a). In touch with the remote world: Remote collaboration with augmented reality drawings and virtual navigation. In VRST, pages 197–205. ACM.

Gauglitz, S., Nuernberger, B., Turk, M., and Höllerer, T. (2014b). World-stabilized annotations and virtual scene navigation for remote collaboration. In UIST, pages 449–459, New York, NY, USA. ACM.