approaches: 1) a machine learning based approach where scene consistent illumination is estimated and applied to augmentations, as championed for example in (LeGendre et al., 2019), or 2) model-based approaches where some sort of parametric illumination model is tuned to the scene based on extraction of various properties from the image stream, e.g., (Bertolini and Madsen, 2020).
For outdoor daytime AR we propose to employ an adaptive daylight model: a daylight model that uses the time, date, compass reading, and geo-location to compute the direction vector to the sun. The local scene model would be used to classify which areas of the scene should be in shadow (if the sun is actually shining at all). The adaptive part of the daylight model lies in tweaking it to match the actual weather conditions, e.g., whether it is a clear blue sky day, partly overcast, completely overcast, or rainy, or even whether there is snow. We believe this adaptation is possible through machine learning approaches based on monitoring the video feed on the device, and there are already examples of work in this area, e.g., (Lu et al., 2017).
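The sun-direction part of such a daylight model is straightforward to sketch. The following is a minimal illustration using well-known simplified solar-position formulas (declination and hour angle); the function name and interface are our own for illustration, the formulas are accurate only to roughly a degree, and a production system would use a proper ephemeris and convert device clock time and longitude to local solar time:

```python
import math

def sun_direction(lat_deg, day_of_year, solar_hour):
    """Approximate direction vector to the sun in local East-North-Up
    coordinates, from latitude, day of year, and local solar time (hours).
    Simplified formulas -- a sketch, not ephemeris-grade."""
    lat = math.radians(lat_deg)
    # Solar declination: seasonal tilt of the sun, approx. formula.
    decl = math.radians(
        -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # Hour angle: the sun moves 15 degrees per hour from solar noon.
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    # Elevation above the horizon.
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(sin_elev)
    # Azimuth measured clockwise from north (clamp guards rounding).
    cos_az = ((math.sin(decl) - math.sin(lat) * sin_elev)
              / (math.cos(lat) * math.cos(elev)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:  # afternoon: sun is west of the meridian
        az = 2.0 * math.pi - az
    # Unit vector towards the sun: (East, North, Up).
    return (math.cos(elev) * math.sin(az),
            math.cos(elev) * math.cos(az),
            math.sin(elev))
```

For example, at latitude 57 N (northern Denmark) at solar noon on the summer solstice this yields an elevation of roughly 56 degrees with the sun due south; the resulting vector can be fed directly to the shadow classification step.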
The actual estimation of the sun and sky radiances we would pursue by fusing existing shadow-based approaches with an inverse-rendering-inspired approach, comparing the current appearance of surfaces in the local scene model with their appearance as stored in the cloud model (Google Earth). We might want to look into laser range finder based capture of huge point clouds for the streets and buildings, while simultaneously capturing the corresponding illumination conditions with omnidirectional HDR cameras. This would enable computation of surface reflectances. These stored models and reflectances could then, at run-time on the handheld device, make it realistic to estimate the illumination conditions at that particular time. The viability of such an approach was tentatively demonstrated in, for example, (Jensen et al., 2006).
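To give a flavour of what this fusion could look like, here is a minimal sketch, assuming Lambertian surfaces and a two-term illumination model (a directional sun plus a constant sky): with reflectances known from the stored model, the observed radiance of sampled surface points, together with the shadow classification, gives a linear system in the two unknown irradiances. The function name, interface, and the two-term model are our own illustrative assumptions, not the method of any of the cited works:

```python
import numpy as np

def estimate_sun_sky(albedo, radiance, n_dot_l, in_sun):
    """Least-squares estimate of sun and sky irradiance from surface
    points whose reflectance (albedo) is known from a stored model.

    Assumed Lambertian image-formation model (a deliberate simplification):
        L = (albedo / pi) * (E_sky + in_sun * max(0, n.l) * E_sun)
    All arguments are 1-D arrays over sampled points; `in_sun` is 1 for
    points classified as directly lit, 0 for points in shadow."""
    albedo = np.asarray(albedo, dtype=float)
    shading = (np.maximum(np.asarray(n_dot_l, dtype=float), 0.0)
               * np.asarray(in_sun, dtype=float))
    # Design matrix: one column per unknown irradiance term.
    A = np.column_stack([albedo / np.pi * shading,   # sun contribution
                         albedo / np.pi])            # sky contribution
    e_sun, e_sky = np.linalg.lstsq(
        A, np.asarray(radiance, dtype=float), rcond=None)[0]
    return e_sun, e_sky
```

Even this toy formulation shows why the shadow classification matters: the shadowed points constrain the sky term alone, which is what disambiguates the two unknowns.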
We believe this combination of 1) streamed, previously acquired, static models, and 2) run-time acquired additional geometry and illumination estimation offers a realistic promise of easy-to-use, real-time handheld AR which can run on off-the-shelf current smartphones. For outdoor scenarios, that is; indoor scenarios are still much more complicated from an illumination point of view. The only comfort we have is that initial perceptual experiments indicate that human tolerance to imperfections in illumination correctness is higher for indoor scenarios, probably because it is more difficult to judge what actually looks correct, as long as the rendered augmentations are largely consistent with the real scene.
5 CONCLUSIONS
In this paper we have attempted to give an overview of the primary challenges involved in developing realistic AR on handheld devices which can dynamically adapt to changing illumination conditions.
We fundamentally believe a lot more work is required on perceptual studies into how tolerant humans are to various aspects of imperfections in the visual quality of AR. That said, we have proposed what we believe to be the best path for future research: a path that involves mixing geometry capture on the device using Structure-from-Motion techniques with streamed, pre-captured gross models of the environment. Dynamically adaptive illumination estimation would then be based on inverse rendering techniques, comparing real-time scene appearance with stored scene reflectances combined with a parametric daylight model.
One day in the future it will be possible to hunt visually convincing augmented dinosaurs in the streets; that's the dream!
ACKNOWLEDGEMENTS
This work was partially funded by the LER project no. EUDP 2015-I under the Danish national EUDP programme, and partially by the DARWIN project under the Innovation Fund Denmark, case number 6151-00020B. This funding is gratefully acknowledged. The author would also like to take this opportunity to thank colleagues and students, past and present, for inspiration.
REFERENCES
ARCore (2020). Using ARCore to light models in a scene. https://developers.google.com/ar/develop/unity/light-estimation. Accessed: January 7th, 2020.

Azuma, R. T., Baillot, Y., Behringer, R., Feiner, S., Julier, S., and MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21(6):34–47.

Barreira, J., Bessa, M., Barbosa, L., and Magalhaes, L. (2018). A context-aware method for authentically simulating outdoors shadows for mobile augmented reality. IEEE Transactions on Visualization and Computer Graphics, 24(3):1223–1231.

Bertolini, F. and Madsen, C. B. (2020). Real time outdoor light estimation for mobile augmented reality. In Proceedings: International Conference on Computer Graphics Theory and Applications. Accepted.
GRAPP 2020 - 15th International Conference on Computer Graphics Theory and Applications