On the Segmentation and Classification of Water in Videos

Pascal Mettes, Robby T. Tan, Remco Veltkamp

2014

Abstract

The automatic recognition of water has a wide range of applications, yet little attention has been paid to solving this specific problem. Current literature generally treats the problem as part of more general recognition tasks, such as material recognition and dynamic texture recognition, without distinctly analyzing and characterizing the visual properties of water. The algorithm presented here introduces a hybrid descriptor based on the joint spatial and temporal local behaviour of water surfaces in videos. The temporal behaviour is quantified from the temporal brightness signals of local patches, while the spatial behaviour is characterized by Local Binary Pattern histograms. Based on the hybrid descriptor, the probability that a small region is water is computed using a Decision Forest. Furthermore, binary Markov Random Fields are used to segment the image frames. Experimental results on a new, publicly available water database and a subset of the DynTex database show the effectiveness of the method in discriminating water from other dynamic and static surfaces and objects.
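The descriptor described above combines a temporal and a spatial component per local patch. The sketch below is an illustrative reconstruction, not the authors' exact implementation: it summarizes a patch's temporal brightness signal by Fourier magnitudes (one plausible quantification; the paper's specific encoding may differ) and computes a basic 8-neighbour LBP histogram on a middle frame, concatenating the two into a hybrid feature vector.

```python
import numpy as np

def temporal_descriptor(patch_video, n_bins=16):
    """patch_video: (T, H, W) grayscale patch over T frames.
    Mean brightness per frame gives a temporal signal; its Fourier
    magnitudes (a hypothetical choice here) summarize water motion."""
    signal = patch_video.mean(axis=(1, 2))
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return spectrum[:n_bins] / (spectrum.sum() + 1e-8)

def lbp_histogram(image):
    """Basic 8-neighbour Local Binary Pattern histogram over the
    interior pixels of a single grayscale frame."""
    c = image[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = image[1 + dy:image.shape[0] - 1 + dy,
                   1 + dx:image.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def hybrid_descriptor(patch_video):
    """Concatenate temporal and spatial components; the resulting
    vector would feed a Decision Forest classifier."""
    temporal = temporal_descriptor(patch_video)
    spatial = lbp_histogram(patch_video[len(patch_video) // 2])
    return np.concatenate([temporal, spatial])
```

In the paper's pipeline, vectors like these would be classified per region by a Decision Forest, after which binary Markov Random Field inference enforces spatial coherence of the water/non-water labeling; those two stages are omitted from this sketch.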

References

  1. Beauchemin, S. and Barron, J. (1995). The computation of optical flow. ACM Computing Surveys, 27(3):433-466.
  2. Bochkanov, S. (1999-2013). ALGLIB. www.alglib.net.
  3. Boykov, Y. and Kolmogorov, V. (2004). An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI.
  4. Chan, A. and Vasconcelos, N. (2008). Modeling, clustering, and segmenting video with mixtures of dynamic textures. PAMI, 30(5):909-926.
  5. Criminisi, A., Shotton, J., and Konukoglu, E. (2012). Decision forests. Foundations and Trends in Computer Graphics and Vision, 7(2):81-227.
  6. Doretto, G., Cremers, D., Favaro, P., and Soatto, S. (2003). Dynamic texture segmentation. ICCV, 2:1236-1242.
  7. Fazekas, S. and Chetverikov, D. (2007). Analysis and performance evaluation of optical flow features for dynamic texture recognition. SPIC, 22:680-691.
  8. Hu, D., Bo, L., and Ren, X. (2011). Toward robust material recognition for everyday objects. BMVC, pages 48.1-48.11.
  9. Kontschieder, P., Kohli, P., Shotton, J., and Criminisi, A. (2013). GeoF: Geodesic forests for learning coupled predictors. CVPR.
  10. Mumtaz, A., Coviello, E., Lanckriet, G., and Chan, A. (2013). Clustering dynamic textures with the hierarchical EM algorithm for modeling video. PAMI, 35(7):1606-1621.
  11. Péteri, R. and Chetverikov, D. (2005). Dynamic texture recognition using normal flow and texture regularity. PRIA, 3523:223-230.
  12. Péteri, R., Fazekas, S., and Huiskes, M. (2010). DynTex: A comprehensive database of dynamic textures. Pattern Recognition Letters, 31(12):1627-1632.
  13. Rankin, A. and Matthies, L. (2006). Daytime water detection and localization for unmanned ground vehicle autonomous navigation. Proceedings of the 25th Army Science Conference.
  14. Saisan, P., Doretto, G., Wu, Y. N., and Soatto, S. (2001). Dynamic texture recognition. CVPR, 2:II-58-II-63.
  15. Schwind, R. (1991). Polarization vision in water insects and insects living on a moist substrate. Journal of Comparative Physiology A, 169(5):531-540.
  16. Sharan, L., Liu, C., Rosenholtz, R., and Adelson, E. (2013). Recognizing materials using perceptually inspired features. IJCV, pages 1-24.
  17. Sharan, L., Rosenholtz, R., and Adelson, E. (2009). Material perception: What can you see in a brief glance? [abstract]. Journal of Vision, 9(8):784.
  18. Smith, A., Teal, M., and Voles, P. (2003). The statistical characterization of the sea for the segmentation of maritime images. Video/Image Processing and Multimedia Communications, 2:489-494.
  19. Snoek, C., Worring, M., and Smeulders, A. (2005). Early versus late fusion in semantic video analysis. In Proceedings of the 13th annual ACM international conference on Multimedia, pages 399-402. ACM.
  20. Tenenbaum, J., de Silva, V., and Langford, J. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323.
  21. Varma, M. and Zisserman, A. (2005). A statistical approach to texture classification from single images. IJCV, 62(1):61-81.
  22. Varma, M. and Zisserman, A. (2009). A statistical approach to material classification using image patch exemplars. PAMI, 31(11):2032-2047.
  23. Zhao, G. and Pietikäinen, M. (2006). Local binary pattern descriptors for dynamic texture recognition. ICPR, 2:211-214.
  24. Zhao, G. and Pietikäinen, M. (2007). Dynamic texture recognition using local binary patterns with an application to facial expressions. PAMI, 29(6):915-928.


Paper Citation


in Harvard Style

Mettes P., Tan R. T. and Veltkamp R. (2014). On the Segmentation and Classification of Water in Videos. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-003-1, pages 283-292. DOI: 10.5220/0004680202830292


in Bibtex Style

@conference{visapp14,
author={Pascal Mettes and Robby T. Tan and Remco Veltkamp},
title={On the Segmentation and Classification of Water in Videos},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={283-292},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004680202830292},
isbn={978-989-758-003-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2014)
TI - On the Segmentation and Classification of Water in Videos
SN - 978-989-758-003-1
AU - Mettes P.
AU - Tan R. T.
AU - Veltkamp R.
PY - 2014
SP - 283
EP - 292
DO - 10.5220/0004680202830292