Dense Segmentation of Textured Fruits in Video Sequences

Waqar S. Qureshi, Shin'ichi Satoh, Matthew N. Dailey, Mongkol Ekpanyapong

2014

Abstract

Autonomous monitoring of fruit crops based on mobile camera sensors requires methods to segment fruit regions from the background in images. Previous methods based on color and shape cues have been successful in some cases, but the detection of textured green fruits among green plant material remains a challenging problem. A recently proposed method uses sparse keypoint detection, keypoint descriptor computation, and keypoint descriptor classification, followed by morphological techniques to fill the gaps between positively classified keypoints. We propose a textured fruit segmentation method based on super-pixel oversegmentation, dense SIFT descriptors, and bag-of-visual-word histogram classification within each super-pixel. An empirical evaluation of the proposed technique for textured fruit segmentation yields a 96.67% detection rate, a per-pixel accuracy of 97.657%, and a per-frame false alarm rate of 0.645%, compared to a detection rate of 90.0%, an accuracy of 84.94%, and a false alarm rate of 0.887% for the baseline sparse keypoint-based method. We conclude that super-pixel oversegmentation, dense SIFT descriptors, and bag-of-visual-word histogram classification are effective for in-field segmentation of textured green fruits from the background.
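
The pipeline described in the abstract can be sketched in a few steps: oversegment each frame into super-pixels, compute SIFT descriptors on a dense grid, quantize them against a visual vocabulary, build one bag-of-visual-words histogram per super-pixel, and classify each histogram as fruit or background. The sketch below is illustrative only and is not the authors' implementation: it assumes SLIC super-pixels (scikit-image) as a stand-in for the paper's oversegmentation, OpenCV SIFT computed at grid keypoints for dense SIFT, and scikit-learn K-means for the vocabulary; helper names such as build_vocabulary and classify_superpixels are hypothetical.

# Minimal sketch of a super-pixel + dense-SIFT + bag-of-visual-words
# segmentation pipeline. SLIC, grid-based OpenCV SIFT, and K-means are
# assumptions standing in for the components named in the abstract.

import cv2
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

STEP = 8          # grid spacing for dense keypoints (pixels)
PATCH = 16        # SIFT patch size (pixels)
VOCAB_SIZE = 100  # number of visual words

sift = cv2.SIFT_create()

def dense_sift(gray):
    """Compute SIFT descriptors on a regular grid; return (x, y) locations and descriptors."""
    h, w = gray.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), PATCH)
                 for y in range(STEP // 2, h, STEP)
                 for x in range(STEP // 2, w, STEP)]
    keypoints, descriptors = sift.compute(gray, keypoints)
    locations = np.array([kp.pt for kp in keypoints])
    return locations, descriptors

def build_vocabulary(descriptor_list, k=VOCAB_SIZE):
    """Cluster training descriptors into a bag-of-visual-words vocabulary."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptor_list))

def superpixel_histograms(image_bgr, vocab):
    """Oversegment the frame and build one L1-normalized visual-word histogram per super-pixel."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    labels = slic(rgb, n_segments=400, compactness=20)  # super-pixel label map
    locations, descriptors = dense_sift(gray)
    words = vocab.predict(descriptors)
    histograms = {}
    for (x, y), word in zip(locations.astype(int), words):
        sp = labels[y, x]
        hist = histograms.setdefault(sp, np.zeros(VOCAB_SIZE))
        hist[word] += 1
    for sp in histograms:  # normalize so super-pixel size does not dominate
        histograms[sp] /= max(histograms[sp].sum(), 1.0)
    return labels, histograms

def classify_superpixels(image_bgr, vocab, classifier):
    """Label each super-pixel as fruit (1) or background (0); return a dense binary mask."""
    labels, histograms = superpixel_histograms(image_bgr, vocab)
    mask = np.zeros(labels.shape, dtype=np.uint8)
    for sp, hist in histograms.items():
        if classifier.predict(hist.reshape(1, -1))[0] == 1:
            mask[labels == sp] = 1
    return mask

Here, classifier stands for any per-histogram binary classifier (for example an sklearn.svm.SVC) trained on visual-word histograms of super-pixels hand-labeled as fruit or background; the returned mask is a dense fruit/background segmentation of the frame of the kind evaluated in the paper.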

Paper Citation


in Harvard Style

Qureshi W., Satoh S., Dailey M. and Ekpanyapong M. (2014). Dense Segmentation of Textured Fruits in Video Sequences. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-004-8, pages 441-447. DOI: 10.5220/0004689304410447


in Bibtex Style

@conference{visapp14,
author={Waqar S. Qureshi and Shin'ichi Satoh and Matthew N. Dailey and Mongkol Ekpanyapong},
title={Dense Segmentation of Textured Fruits in Video Sequences},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={441-447},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004689304410447},
isbn={978-989-758-004-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)
TI - Dense Segmentation of Textured Fruits in Video Sequences
SN - 978-989-758-004-8
AU - Qureshi W.
AU - Satoh S.
AU - Dailey M.
AU - Ekpanyapong M.
PY - 2014
SP - 441
EP - 447
DO - 10.5220/0004689304410447