Fast Semi-automatic Target Initialization based on Visual Saliency for Airborne Thermal Imagery

Çağlar Aytekin, Emre Tunalı, Sinan Öz

Abstract

In this study, a semi-automatic target initialization algorithm is introduced based on a recently proposed visual saliency approach. First, a center-surround difference based initial window selection is applied around the point coordinate provided by the user, in order to select the window most likely to contain the actual target together with a piecewise-connected background. Then, a recently proposed visual saliency algorithm is exploited to detect the bounding box encapsulating the most salient part of the object. Experiments show that the saliency-based tracking-window initialization tolerates marking errors, i.e., erroneous user inputs, and, compared with several fixed-window-size initializations, boosts the performance of several tracking algorithms in terms of the number of frames over which successful tracking is achieved.
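The center-surround window-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it scores each candidate window around the user's click by the absolute difference between the mean intensity inside the window and that of a surrounding ring, then keeps the highest-contrast window. The candidate sizes and the ring margin are illustrative assumptions.

```python
import numpy as np

def center_surround_score(image, cx, cy, w, h, margin=0.5):
    """Contrast between a candidate window at (cx, cy) and its surrounding ring."""
    H, W = image.shape
    # Inner (candidate) window, clipped to the image bounds.
    x0, x1 = max(cx - w // 2, 0), min(cx + w // 2 + 1, W)
    y0, y1 = max(cy - h // 2, 0), min(cy + h // 2 + 1, H)
    inner = image[y0:y1, x0:x1]
    # Outer box: the window grown by a margin; the ring is outer minus inner.
    mw, mh = int(w * margin), int(h * margin)
    X0, X1 = max(x0 - mw, 0), min(x1 + mw, W)
    Y0, Y1 = max(y0 - mh, 0), min(y1 + mh, H)
    outer = image[Y0:Y1, X0:X1].astype(float)
    ring_sum = outer.sum() - inner.sum()
    ring_area = outer.size - inner.size
    if ring_area == 0:  # window already covers the whole image
        return 0.0
    return abs(inner.mean() - ring_sum / ring_area)

def select_window(image, cx, cy, sizes=((9, 9), (15, 15), (21, 21), (31, 31))):
    """Pick the candidate window size with the highest center-surround contrast."""
    return max(sizes, key=lambda s: center_surround_score(image, cx, cy, *s))
```

Because the score peaks when the window tightly bounds a region that contrasts with its surround, a click that lands slightly off a bright thermal target still tends to select a window size close to the target's extent, which is the marking-error tolerance the abstract refers to.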



Paper Citation


in Harvard Style

Aytekin Ç., Tunalı E. and Öz S. (2014). Fast Semi-automatic Target Initialization based on Visual Saliency for Airborne Thermal Imagery. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014), ISBN 978-989-758-009-3, pages 490-497. DOI: 10.5220/0004668904900497


in Bibtex Style

@conference{visapp14,
author={Çağlar Aytekin and Emre Tunalı and Sinan Öz},
title={Fast Semi-automatic Target Initialization based on Visual Saliency for Airborne Thermal Imagery},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={490-497},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004668904900497},
isbn={978-989-758-009-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)
TI - Fast Semi-automatic Target Initialization based on Visual Saliency for Airborne Thermal Imagery
SN - 978-989-758-009-3
AU - Aytekin Ç.
AU - Tunalı E.
AU - Öz S.
PY - 2014
SP - 490
EP - 497
DO - 10.5220/0004668904900497