CONTINUOUS REGION-BASED PROCESSING OF SPATIOTEMPORAL SALIENCY
Jan Tünnermann, Bärbel Mertsching
2012
Abstract
This paper describes a region-based attention approach to motion saliency, which is important for systems that perceive and interact with dynamic environments. Frames are collected into volumes, which are sliced into stacks of spatiotemporal images. Color segmentation is applied to these images, and the orientations of the resulting regions are used to calculate their prominence in the spatiotemporal context. The saliency is then projected back into image space. Tests with different inputs produced results comparable to those of other state-of-the-art methods. We also demonstrate how top-down influence can affect the processing in order to attend to objects that move in a particular direction. The model constitutes a framework for the later integration of spatiotemporal and spatial saliency as independent streams that respect different requirements in resolution and timing.
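The abstract outlines the slice-and-orient pipeline only at a high level. The listing below is a minimal Python/NumPy sketch of the underlying x-t slice idea: a frame volume is cut into time-versus-x slices, a crude intensity threshold stands in for the paper's color segmentation, and a region's deviation from the temporal axis serves as a toy motion-saliency score that is projected back into image space. All function names, the thresholding step, and the saliency heuristic are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the spatiotemporal-slice idea described in the abstract.
# Names, thresholds, and the saliency heuristic below are illustrative
# assumptions, not the authors' implementation.

import numpy as np

def xt_slices(volume):
    """Yield x-t slices of a frame volume shaped (T, H, W)."""
    T, H, W = volume.shape
    for y in range(H):
        yield y, volume[:, y, :]          # shape (T, W): time vs. x

def region_orientation(mask):
    """Principal-axis orientation (radians) of a binary region in a t-x slice,
    estimated from second-order image moments."""
    t, x = np.nonzero(mask)
    t = t - t.mean()
    x = x - x.mean()
    mu_tt, mu_xx, mu_tx = (t * t).mean(), (x * x).mean(), (t * x).mean()
    return 0.5 * np.arctan2(2.0 * mu_tx, mu_tt - mu_xx)

def slice_saliency(sl, threshold=0.5):
    """Toy saliency for one x-t slice: regions whose streaks deviate from the
    temporal axis (i.e. objects that move in x) are marked salient."""
    saliency = np.zeros(sl.shape[1])      # one value per x position
    mask = sl > threshold                 # crude stand-in for segmentation
    if mask.sum() < 2:
        return saliency
    angle = region_orientation(mask)
    # A streak parallel to the time axis (angle ~ 0) means a static object;
    # larger deviation means faster motion and, here, higher saliency.
    saliency[mask.any(axis=0)] = abs(np.sin(angle))
    return saliency

def motion_saliency_map(volume):
    """Project per-slice saliency back into image space (H, W)."""
    T, H, W = volume.shape
    out = np.zeros((H, W))
    for y, sl in xt_slices(volume):
        out[y, :] = slice_saliency(sl)
    return out

if __name__ == "__main__":
    # Synthetic volume: a bright blob drifting to the right over 16 frames.
    T, H, W = 16, 32, 64
    vol = np.zeros((T, H, W))
    for t in range(T):
        vol[t, 12:20, 4 + 2 * t: 12 + 2 * t] = 1.0
    print(motion_saliency_map(vol).max())  # > 0 along rows the moving blob crossed

A fuller implementation would presumably also slice along y-t and combine both directions, and would treat each color-segmented region separately instead of thresholding a whole slice; the sketch only illustrates why a tilted streak in a spatiotemporal slice signals motion.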
Paper Citation
in Harvard Style
Tünnermann J. and Mertsching B. (2012). CONTINUOUS REGION-BASED PROCESSING OF SPATIOTEMPORAL SALIENCY. In Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2012) ISBN 978-989-8565-03-7, pages 230-239. DOI: 10.5220/0003823402300239
in Bibtex Style
@conference{visapp12,
author={Jan Tünnermann and Bärbel Mertsching},
title={CONTINUOUS REGION-BASED PROCESSING OF SPATIOTEMPORAL SALIENCY},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2012)},
year={2012},
pages={230-239},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003823402300239},
isbn={978-989-8565-03-7},
}
in EndNote Style
TY - CONF
JO - Proceedings of the International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2012)
TI - CONTINUOUS REGION-BASED PROCESSING OF SPATIOTEMPORAL SALIENCY
SN - 978-989-8565-03-7
AU - Tünnermann J.
AU - Mertsching B.
PY - 2012
SP - 230
EP - 239
DO - 10.5220/0003823402300239