5 FINAL REMARKS
The Saliency Sandbox is still under development and
grows continuously in terms of available feature and
activation maps. Furthermore, we will add functionality
for evaluation based on real gaze data. By integrating
algorithms for eye movement detection (e.g.,
MERCY (Braunagel et al., 2016)) and scanpath analysis
(e.g., SubsMatch (Kübler et al., 2014)) in dynamic
scenes, we will create a basis for further extensions,
such as the explorative search for suitable
combinations of activation and feature maps, or neural
networks for a non-linear combination of activation
maps. Here, the focus is primarily on dynamic
scenes rather than single frames, which poses new
challenges since only a few gaze points are available
per frame. The goal is to create a comprehensive
environment for implementing new saliency maps
quickly and comparably, and for combining existing
approaches to design saliency maps for a particular
use case.
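The combination of activation maps mentioned above can be illustrated with a small sketch. The function below is a hypothetical example (not part of the Saliency Sandbox API): it normalizes each map to [0, 1] and merges them with a weighted sum, the linear baseline that an explorative search or a neural network could later replace with learned, non-linear weighting.

```python
import numpy as np

def combine_maps(maps, weights=None):
    """Linearly combine activation maps into one saliency map.

    Each map is min-max normalized to [0, 1] before weighting, so
    maps with different dynamic ranges contribute comparably.
    `weights` defaults to a uniform combination.
    """
    maps = [np.asarray(m, dtype=np.float64) for m in maps]
    if weights is None:
        weights = np.full(len(maps), 1.0 / len(maps))
    combined = np.zeros_like(maps[0])
    for w, m in zip(weights, maps):
        rng = m.max() - m.min()
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        combined += w * norm
    # Rescale the result to [0, 1] for display or comparison.
    rng = combined.max() - combined.min()
    return (combined - combined.min()) / rng if rng > 0 else combined
```

A non-linear combination would replace the weighted sum with, for instance, a small multilayer perceptron applied per pixel across the normalized maps.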
Source code, binaries for Linux x64, and extensive
documentation are available at:
www.ti.uni-tuebingen.de/perception
REFERENCES
Achanta, R., Hemami, S., Estrada, F., and Susstrunk, S.
(2009). Frequency-tuned salient region detection.
In IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2009), pages 1597–1604.
Braunagel, C., Geisler, D., Stolzmann, W., Rosenstiel, W.,
and Kasneci, E. (2016). On the necessity of adap-
tive eye movement classification in conditionally au-
tomated driving scenarios. In Proceedings of the Ninth
Biennial ACM Symposium on Eye Tracking Research
& Applications, pages 19–26. ACM.
Derrington, A. M., Krauskopf, J., and Lennie, P. (1984).
Chromatic mechanisms in lateral geniculate nucleus
of macaque. The Journal of Physiology, 357(1):241–
265.
Godbehere, A. B., Matsukawa, A., and Goldberg, K.
(2012). Visual tracking of human visitors under
variable-lighting conditions for a responsive audio art
installation. In 2012 American Control Conference
(ACC), pages 4305–4312. IEEE.
Harel, J., Koch, C., and Perona, P. (2006a). Graph-based vi-
sual saliency. In Advances in neural information pro-
cessing systems, pages 545–552.
Harel, J., Koch, C., and Perona, P. (2006b). A saliency
implementation in MATLAB. URL:
http://www.klab.caltech.edu/~harel/share/gbvs.php.
Hou, X. and Zhang, L. (2007). Saliency detection: A spec-
tral residual approach. In 2007 IEEE Conference on
Computer Vision and Pattern Recognition, pages 1–8.
IEEE.
Itti, L. (2004). The iLab Neuromorphic Vision C++ Toolkit:
Free tools for the next generation of vision algorithms.
The Neuromorphic Engineer, 1(1):10.
Itti, L. and Koch, C. (2000). A saliency-based search mech-
anism for overt and covert shifts of visual attention.
Vision Research, 40(10):1489–1506.
ITU (2011). Studio encoding parameters of digital televi-
sion for standard 4:3 and wide screen 16:9 aspect ra-
tios.
KaewTraKulPong, P. and Bowden, R. (2002). An im-
proved adaptive background mixture model for real-
time tracking with shadow detection. In Video-based
surveillance systems, pages 135–144. Springer.
Kasneci, E., Kasneci, G., Kübler, T. C., and Rosenstiel, W.
(2015). Online recognition of fixations, saccades, and
smooth pursuits for automated analysis of traffic haz-
ard perception. In Artificial Neural Networks, pages
411–434. Springer.
Kübler, T. C., Kasneci, E., and Rosenstiel, W. (2014). Sub-
sMatch: Scanpath similarity in dynamic scenes based
on subsequence frequencies. In Proceedings of the
Symposium on Eye Tracking Research and Applica-
tions, pages 319–322. ACM.
Patrone, A. R., Valuch, C., Ansorge, U., and Scherzer,
O. (2016). Dynamical optical flow of saliency
maps for predicting visual attention. arXiv preprint
arXiv:1606.07324.
Schauerte, B. and Stiefelhagen, R. (2012). Quaternion-
based spectral saliency detection for eye fixation pre-
diction. In Computer Vision–ECCV 2012, pages 116–
129. Springer.
Tafaj, E., Kasneci, G., Rosenstiel, W., and Bogdan, M.
(2012). Bayesian online clustering of eye movement
data. In Proceedings of the Symposium on Eye Track-
ing Research and Applications, ETRA ’12, pages
285–288. ACM.
Treisman, A. M. and Gelade, G. (1980). A feature-
integration theory of attention. Cognitive psychology,
12(1):97–136.
Von Goethe, J. W. (1840). Theory of colours, volume 3.
MIT Press.
Walther, D. and Koch, C. (2006). Modeling attention to
salient proto-objects. Neural networks, 19(9):1395–
1407.
Zhang, J. and Sclaroff, S. (2013). Saliency detection: A
boolean map approach. In Proceedings of the IEEE
International Conference on Computer Vision, pages
153–160.
Zivkovic, Z. (2004). Improved adaptive gaussian mixture
model for background subtraction. In Pattern Recog-
nition, 2004. ICPR 2004. Proceedings of the 17th In-
ternational Conference on, volume 2, pages 28–31.
VISAPP 2017 - International Conference on Computer Vision Theory and Applications