There are also limitations that our AR displays inherit. One is that the camera is typically placed at a corner and not at the center of the display, which translates to more missing pixels for the planar-geometry transparent AR display when the scene geometry is close (Fig. 1h). A second limitation is the latency between frame capture and display. Whereas this latency is acceptable when the display is used as a camera viewfinder, it is problematic in AR, where it causes annotations to drift as the display moves. In our specific context, latency causes the transparency accuracy to lag when the display moves abruptly (see the driving sequence in the video accompanying our paper). Future work could examine leveraging motion prediction or low-latency inertial sensors such as the device's accelerometer and gyroscope to alleviate this latency, as sketched below, but the more robust solution likely requires that the latency be eliminated by phone and tablet manufacturers.
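As an illustration of the motion-prediction direction, the following is a minimal sketch, assuming the dominant motion over the latency interval is the device rotation reported by the gyroscope. The names predict_homography, omega, and latency_s are hypothetical, the rotation sign depends on the sensor axis conventions, and a real implementation would also need to account for translation.

```python
import numpy as np

def predict_homography(H, omega, K, latency_s):
    """Extrapolate the transparency homography by the rotation the device
    is predicted to undergo during the capture-to-display latency.
    H maps camera pixels to display pixels, omega is the gyroscope
    angular velocity (rad/s), K holds the camera intrinsics."""
    theta = np.asarray(omega) * latency_s   # predicted rotation (axis-angle)
    angle = np.linalg.norm(theta)
    if angle < 1e-8:
        return H                            # no appreciable motion
    axis = theta / angle
    # Rodrigues' formula: rotation matrix from the axis-angle vector
    S = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * S + (1.0 - np.cos(angle)) * (S @ S)
    # A pure camera rotation R warps image points by the infinite
    # homography K R K^-1; composing it with H pre-warps the frame
    # toward where the scene is predicted to be at display time.
    return H @ K @ R @ np.linalg.inv(K)
```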
In its current implementation, the user is asked to choose between the two modes: planar proxy or distant geometry. In planar-proxy mode, the system assumes that it sees only a plane. Future work could look into actively searching for planes and into switching between the two modes on the fly, as needed, without user intervention; one possible heuristic for such a switch is sketched below. Our display transparency works best when the user's head is indeed at the default distance above the center of the display. Holding this position reasonably well is possible, as shown by the video and by the results of our study. Future work could examine giving the user cues about their head position, which is easier to do than full head tracking.
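A minimal sketch of such an automatic switch follows, assuming a sparse 3D point cloud in camera coordinates (e.g., from the visual tracker) is available, which our current system does not require. The function choose_mode and both thresholds are illustrative, not part of our implementation.

```python
import numpy as np

def choose_mode(points, plane_rmse_thresh=0.05, far_thresh=10.0):
    """Heuristic mode selector over a sparse 3D point cloud (meters,
    camera coordinates). Returns 'distant' when the scene is far enough
    for parallax to be negligible, and 'planar' when a single plane
    explains the visible points well."""
    points = np.asarray(points)
    if np.median(points[:, 2]) > far_thresh:
        return 'distant'                 # far scene: parallax is negligible
    # Least-squares plane fit z = a*x + b*y + c over the visible points
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    rmse = np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))
    # Fall back to distant-geometry mode when no single plane fits well
    return 'planar' if rmse < plane_rmse_thresh else 'distant'
```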
Another limitation that our approach inherits from the conventional AR display interface is the lack of depth cues. Whereas a truly transparent display shows the scene stereoscopically, and therefore at the correct depth, our simulated transparent display provides a monoscopic view of the scene at a fixed, nearby distance. Even though one has to change focus from the nearby display to the scene, our study participants were able to extrapolate the direction of the annotation and map it to the scene more accurately when using our transparent display than when using the conventional AR display, which is equally devoid of depth cues.
Our approach for improving AR display transparency has the advantage of working on any phone or tablet with a back-facing camera, without requiring depth-acquisition or user-tracking capabilities. Furthermore, the computational cost of the homography is low, so our approach is compatible even with low-end devices; the sketch below illustrates the per-frame work. Our approach brings an infrastructure-level contribution that is ready to be integrated into virtually all AR applications, where it promises the benefit of improved directional guidance.
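To illustrate why the per-frame cost is low: once the 3x3 homography H has been estimated, under either mode, rendering the transparency effect reduces to a single perspective warp of the camera frame. A minimal OpenCV sketch, where render_transparent and display_size are hypothetical names:

```python
import cv2

def render_transparent(frame, H, display_size):
    """Warp the back-camera frame to the display using the 3x3
    transparency homography H; display_size is (width, height)."""
    return cv2.warpPerspective(frame, H, display_size)
```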
ACKNOWLEDGEMENTS
We thank our participants for their essential role in
validating our work. We thank Andreas Schuker and
Susanne Goedicke for all their support. We thank the
anonymous reviewers for their help with improving
this manuscript.