Foreground Segmentation for Moving Cameras under Low Illumination Conditions

Wei Wang, Weili Li, Xiaoqing Yin, Yu Liu, Maojun Zhang

2016

Abstract

A foreground segmentation method comprising image enhancement, trajectory classification and object segmentation is proposed for moving cameras under low illumination conditions. A gradient-field-based image enhancement step is designed to improve low-contrast images. On the basis of dense point trajectories obtained over long frame sequences, a simple and effective clustering algorithm is designed to separate foreground from background trajectories. By combining the classified trajectory points with a marker-controlled watershed algorithm, a new foreground labeling algorithm is proposed that effectively reduces computing cost and improves edge preservation. Experimental results demonstrate the promising performance of the proposed approach compared with competing methods.
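
To make the pipeline concrete, the following minimal Python/OpenCV sketch mirrors the three stages described above. It is not the authors' implementation: CLAHE stands in for the gradient-field-based enhancement, a two-frame Lucas-Kanade tracker with a RANSAC homography test stands in for the long-term dense trajectory clustering, and the frame filenames and thresholds are hypothetical.

# Minimal sketch of the three-stage pipeline (assumptions noted above).
import cv2
import numpy as np

def enhance_low_light(gray):
    # Stand-in for the paper's gradient-field-based enhancement: CLAHE contrast boost.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def classify_trajectories(prev_gray, gray, resid_thresh=2.0):
    # Track sparse points between two frames, fit a global (camera) homography,
    # and treat points that violate it as foreground -- a two-frame simplification
    # of the long-term trajectory clustering used in the paper.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                                 qualityLevel=0.01, minDistance=7)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    ok = st.ravel() == 1
    good0, good1 = p0[ok], p1[ok]
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    pred = cv2.perspectiveTransform(good0, H)
    resid = np.linalg.norm(pred - good1, axis=2).ravel()
    bg_pts = good1[resid <= resid_thresh].reshape(-1, 2)
    fg_pts = good1[resid > resid_thresh].reshape(-1, 2)
    return bg_pts, fg_pts

def label_foreground(frame_bgr, bg_pts, fg_pts):
    # Marker-controlled watershed seeded by the classified trajectory points:
    # label 1 = background seeds, label 2 = foreground seeds, 0 = unknown.
    seeds = np.zeros(frame_bgr.shape[:2], np.uint8)
    for x, y in bg_pts:
        cv2.circle(seeds, (int(x), int(y)), 3, 1, -1)
    for x, y in fg_pts:
        cv2.circle(seeds, (int(x), int(y)), 3, 2, -1)
    markers = cv2.watershed(frame_bgr, seeds.astype(np.int32))
    return np.where(markers == 2, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    prev = cv2.imread("frame_000.png")   # hypothetical input frames
    curr = cv2.imread("frame_001.png")
    prev_g = enhance_low_light(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    curr_g = enhance_low_light(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
    bg, fg = classify_trajectories(prev_g, curr_g)
    cv2.imwrite("foreground_mask.png", label_foreground(curr, bg, fg))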

References

  1. Brox, T. and Malik, J. (2010). Object segmentation by long term analysis of point trajectories. In Proceedings of the 11th European Conference on Computer Vision (ECCV). Springer Berlin Heidelberg.
  2. Elqursh, A. and Elgammal, A. (2012). Online moving camera background subtraction. In Proceedings of the 12th European Conference on Computer Vision (ECCV). Springer Berlin Heidelberg.
  3. Gauch, J. (1999). Image segmentation and analysis via multiscale gradient watershed hierarchies. IEEE Transactions on Image Processing (TIP), pages 69-79.
  4. Jeong, Y., Lim, C., Jeong, B., and Choi, H. (2013). Topic masks for image segmentation. KSII Transactions on Internet and Information Systems (TIIS), pages 3274-3292.
  5. Jiang, Y., Dai, Q., Xue, X., Liu, W., and Ngo, C. (2012). Trajectory-based modeling of human actions with motion reference point. In Proceedings of the 12th European Conference on Computer Vision (ECCV). Firenze, Italy.
  6. Lezama, J., Alahari, K., Sivic, J., and Laptev, I. (2011). Track to the future: Spatio-temporal video segmentation with long-range motion cues. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs.
  7. Liu, Y., Xiao, H., Wang, W., and Zhang, M. (2015a). A robust motion detection algorithm on noisy videos. In Proceedings of the 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brisbane, Australia.
  8. Liu, Y., Xiao, H., Xu, W., Zhang, M., and Zhang, J. (2015b). Data separation of l1-minimization for real-time motion detection. In Proceedings of the 26th British Machine Vision Conference (BMVC). Swansea, UK.
  9. Nonaka, Y., Shimada, A., Nagahara, H., and Taniguchi, R. (2013). Real-time foreground segmentation from moving camera based on case-based trajectory classification. In Proceedings of the 2nd Asian Conference on Pattern Recognition (ACPR). Okinawa, Japan.
  10. Ochs, P. and Brox, T. (2011). Object segmentation in video: A hierarchical variational approach for turning point trajectories into dense regions. In Proceedings of the 13th IEEE International Conference on Computer Vision (ICCV). Barcelona, Spain.
  11. Ochs, P. and Brox, T. (2012). Higher order motion models and spectral clustering. In Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Rhode Island.
  12. Sheikh, Y., Javed, O., and Kanade, T. (2009). Background subtraction for freely moving cameras. In Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV). Kyoto, Japan.
  13. Soille, P. (1999). Morphological Image Analysis: Principles and Applications. Springer-Verlag, Berlin, Germany.
  14. Varadarajan, S., Wang, H., Miller, P., and Zhou, H. (2015a). Fast convergence of regularised region-based mixture of Gaussians for dynamic background modelling. Computer Vision and Image Understanding, pages 45-58.
  15. Varadarajan, S., Miller, P., and Zhou, H. (2015b). Region-based mixture of Gaussians modelling for foreground detection in dynamic scenes. Pattern Recognition (PR), pages 3488-3503.
  16. Yin, X., Wang, B., Li, W., Liu, Y., and Zhang, M. (2015). Background subtraction for moving cameras based on trajectory classification, image segmentation and label inference. KSII Transactions on Internet and Information Systems (TIIS).
  17. Zhang, G., Yuan, Z., Chen, D., Liu, Y., and Zheng, N. (2012). Video object segmentation by clustering region trajectories. In Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Rhode Island.
  18. Zhou, J., Gao, S., and Jin, Z. (2012). A new connected coherence tree algorithm for image segmentation. KSII Transactions on Internet and Information Systems (TIIS), pages 547-565.
  19. Zhu, L., Wang, P., and Xia, D. (2007). Image contrast enhancement by gradient field equalization. Journal of Computer-Aided Design and Computer Graphics, page 1546.


Paper Citation


in Harvard Style

Wang W., Li W., Yin X., Liu Y. and Zhang M. (2016). Foreground Segmentation for Moving Cameras under Low Illumination Conditions. In Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM, ISBN 978-989-758-173-1, pages 65-71. DOI: 10.5220/0005695100650071


in Bibtex Style

@conference{icpram16,
author={Wei Wang and Weili Li and Xiaoqing Yin and Yu Liu and Maojun Zhang},
title={Foreground Segmentation for Moving Cameras under Low Illumination Conditions},
booktitle={Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2016},
pages={65-71},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005695100650071},
isbn={978-989-758-173-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - Foreground Segmentation for Moving Cameras under Low Illumination Conditions
SN - 978-989-758-173-1
AU - Wang W.
AU - Li W.
AU - Yin X.
AU - Liu Y.
AU - Zhang M.
PY - 2016
SP - 65
EP - 71
DO - 10.5220/0005695100650071