Motion Compensated Temporal Image Signature Approach

Haroon Qureshi, Markus Ludwig

Abstract

Detecting salient regions in the temporal domain is a challenging problem. It becomes trickier when there is a moving object in the scene, and more complex still in the presence of camera motion. Camera motion can influence saliency detection in two ways: on the one hand, it can provide important information about the location of a moving object; on the other hand, it can lead to a wrong estimate of the salient regions. It is therefore important to handle this issue sensibly. This paper provides a solution by combining a saliency detection approach with a motion estimation approach. It extends the Temporal Image Signature (TIS) approach (Qureshi, 2013) to a more complex setting in which not only object motion is considered but the influence of camera motion is also compensated.
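The pipeline described in the abstract can be sketched as follows. This is an assumed, minimal illustration, not the authors' implementation: saliency is computed with the Image Signature of Hou et al. (2012) (the sign of the DCT), camera motion is approximated by a single global translation estimated via phase correlation, and the temporal part operates on the motion-compensated frame difference. The paper's actual motion model and combination rule may differ.

```python
# Hypothetical sketch of a motion-compensated temporal image signature.
# Assumptions (not from the paper): global translation suffices as the
# camera-motion model, and saliency of the compensated frame residual
# approximates the temporal signature.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter, shift as nd_shift


def image_signature_saliency(img, sigma=3.0):
    """Saliency via the sign of the DCT (Hou et al., 2012):
    reconstruct from sign(DCT(img)), square, and smooth."""
    recon = idctn(np.sign(dctn(img, norm='ortho')), norm='ortho')
    return gaussian_filter(recon * recon, sigma)


def global_shift(prev, curr):
    """Estimate a global integer translation between two frames by
    phase correlation (a simple stand-in for camera-motion estimation)."""
    cross = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Map peak indices to signed shifts.
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)


def temporal_signature(prev, curr, sigma=3.0):
    """Camera-motion-compensated temporal saliency: align the previous
    frame to the current one, then run the signature on the residual."""
    dy, dx = global_shift(prev, curr)
    aligned = nd_shift(prev, (dy, dx), mode='nearest')
    return image_signature_saliency(np.abs(curr - aligned), sigma)
```

Without the alignment step, a panning camera would make the whole frame appear to move, so the frame difference (and hence the temporal saliency) would light up everywhere; compensating the global motion first leaves only the independently moving object in the residual.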

References

  1. Abdollahian, G. and Delp, E. J. (2007). Finding regions of interest in home videos based on camera motion. In IEEE International Conference on Image Processing (ICIP), volume 4.
  2. Achanta, R., Hemami, S. S., Estrada, F. J., and Süsstrunk, S. (2009). Frequency-tuned salient region detection. In CVPR, pages 1597-1604. IEEE.
  3. Achanta, R. and Süsstrunk, S. (2009). Saliency detection for content-aware image resizing. In IEEE Intl. Conf. on Image Processing.
  4. Borji, A. and Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):185-207.
  5. Borji, A., Tavakoli, H. R., Sihite, D. N., and Itti, L. (2013). Analysis of scores, datasets, and models in visual saliency prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 921-928.
  6. Chamaret, C., Chevet, J.-C., and Le Meur, O. (2010). Spatio-temporal combination of saliency maps and eye-tracking assessment of different strategies. In Proc. IEEE Int. Conf. Image Process, pages 1077-1080.
  7. Chen, Y.-M. and Bajic, I. V. (2010). Motion vector outlier rejection cascade for global motion estimation. IEEE Signal Process. Lett, 17(2):197-200.
  8. Cheng, M.-M., Zhang, G.-X., Mitra, N. J., Huang, X., and Hu, S.-M. (2011). Global contrast based salient region detection. In CVPR, pages 409-416.
  9. Cui, X., Liu, Q., and Metaxas, D. (2009). Temporal spectral residual: fast motion saliency detection. In Proceedings of the 17th ACM International Conference on Multimedia, MM '09, pages 617-620, New York, NY, USA. ACM.
  10. Deigmoeller, J. (2010). Intelligent image cropping and scaling. PhD thesis, Brunel University.
  11. Guo, C., Ma, Q., and Zhang, L. (2008). Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform. In CVPR'08.
  12. Hadizadeh, H., Enriquez, M. J., and Bajic, I. V. (2012). Eye-tracking database for a set of standard video sequences. IEEE Trans. on Image Processing, 21(2):898-903.
  13. Hadizadeh, H. and Bajic, I. V. (2014). Saliency-aware video compression. IEEE Trans. on Image Processing, 23(1):19-33.
  14. Han, J., Ngan, K. N., Li, M., and Zhang, H. (2006). Unsupervised extraction of visual attention objects in color images. IEEE Trans. Circuits Syst. Video Techn., 16(1):141-145.
  15. Hou, X., Harel, J., and Koch, C. (2012). Image signature: Highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell., 34(1):194-201.
  16. Hou, X. and Zhang, L. (2007). Saliency detection: A spectral residual approach. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR07). IEEE Computer Society, pages 1-8.
  17. Itti, L. and Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194-203.
  18. Itti, L., Koch, C., and Niebur, E. (1998). A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Trans. Pattern Anal. Mach. Intell., 20(11):1254-1259.
  19. Cerf, M., Frady, E. P., and Koch, C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, volume 9.
  20. Qureshi, H. (2013). DCT based temporal image signature approach. Proceedings of the 8th International Conference on Computer Vision Theory and Applications (VISAPP '13), 1:208-212.
  21. Qureshi, H. and Ludwig, M. (2013). Improving temporal image signature approach by adding face conspicuity map. Proceedings of the 2nd ROMEO Workshop.
  22. Schauerte, B. and Stiefelhagen, R. (2012). Predicting human gaze using quaternion dct image signature saliency and face detection. In Proceedings of the IEEE Workshop on the Applications of Computer Vision (WACV). IEEE.
  23. Treisman, A. (1986). Features and objects in visual processing. Sci. Am., 255(5):114-125.
  24. Treisman, A. M. and Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12:97-136.
  25. Ma, Y.-F., Hua, X.-S., Lu, L., and Zhang, H.-J. (2005). A generic framework of user attention model and its application in video summarization. IEEE Transactions on Multimedia, 7:907-919.


Paper Citation


in Harvard Style

Qureshi H. and Ludwig M. (2015). Motion Compensated Temporal Image Signature Approach. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015) ISBN 978-989-758-089-5, pages 512-516. DOI: 10.5220/0005303305120516


in Bibtex Style

@conference{visapp15,
author={Haroon Qureshi and Markus Ludwig},
title={Motion Compensated Temporal Image Signature Approach},
booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)},
year={2015},
pages={512-516},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005303305120516},
isbn={978-989-758-089-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)
TI - Motion Compensated Temporal Image Signature Approach
SN - 978-989-758-089-5
AU - Qureshi H.
AU - Ludwig M.
PY - 2015
SP - 512
EP - 516
DO - 10.5220/0005303305120516