Structured Edge Detection for Improved Object Localization using the Discriminative Generalized Hough Transform

Eric Gabriel, Ferdinand Hahmann, Gordon Böer, Hauke Schramm, Carsten Meyer

2016

Abstract

Automatic localization of target objects in digital images is an important task in Computer Vision. The Generalized Hough Transform (GHT) and its variant, the Discriminative Generalized Hough Transform (DGHT), are model-based object localization algorithms which determine the most likely object position based on accumulated votes in the so-called Hough space. Many automatic localization algorithms, including the GHT and the DGHT, operate on edge images, obtained e.g. with the Canny or the Sobel edge detector. However, if the image contains many edges not belonging to the object of interest (e.g. from other objects, background clutter, or noise), these edges cause misleading votes which increase the probability of localization errors. In this paper we investigate the effect of a more sophisticated edge detection algorithm, the Structured Edge Detector, on the performance of a DGHT-based object localization approach. This method utilizes information on the shape of the target object to substantially reduce the number of non-object edges. Combining this technique with the DGHT leads to a significant localization performance improvement for automatic pedestrian and car detection.
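The voting scheme the abstract refers to can be illustrated with a minimal sketch of classical GHT accumulation (not the authors' DGHT implementation): each edge point, together with its quantized gradient orientation, casts votes for candidate reference-point positions via an R-table of learned offsets, and the accumulator maximum gives the hypothesized object position. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def ght_localize(edge_points, r_table, shape):
    """Classical GHT voting: accumulate reference-point votes in Hough space.

    edge_points: iterable of (x, y, orientation_bin) edge pixels
    r_table:     dict mapping orientation_bin -> list of (dx, dy) offsets
                 from an edge point to the model reference point
    shape:       (height, width) of the Hough accumulator
    """
    acc = np.zeros(shape, dtype=np.int32)
    for x, y, b in edge_points:
        for dx, dy in r_table.get(b, []):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                acc[cy, cx] += 1  # one vote per matching model offset
    # most likely object position = accumulator maximum
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    return (ix, iy), acc

# Toy example: two object edges vote consistently for the reference
# point at (10, 8); a clutter edge casts a stray vote elsewhere.
r_table = {0: [(3, 0)], 1: [(0, 4)]}
edges = [(7, 8, 0), (10, 4, 1), (2, 2, 0)]
center, acc = ght_localize(edges, r_table, (20, 20))
# center -> (10, 8); the clutter vote at (5, 2) stays below the maximum
```

This also shows why edge quality matters: every non-object edge adds stray votes to the accumulator, so an edge detector that suppresses background edges, as the Structured Edge Detector does, directly reduces competing peaks.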



Paper Citation


in Harvard Style

Gabriel E., Hahmann F., Böer G., Schramm H. and Meyer C. (2016). Structured Edge Detection for Improved Object Localization using the Discriminative Generalized Hough Transform. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016), ISBN 978-989-758-175-5, pages 393-402. DOI: 10.5220/0005722803930402


in Bibtex Style

@conference{visapp16,
author={Eric Gabriel and Ferdinand Hahmann and Gordon Böer and Hauke Schramm and Carsten Meyer},
title={Structured Edge Detection for Improved Object Localization using the Discriminative Generalized Hough Transform},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={393-402},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005722803930402},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)
TI - Structured Edge Detection for Improved Object Localization using the Discriminative Generalized Hough Transform
SN - 978-989-758-175-5
AU - Gabriel E.
AU - Hahmann F.
AU - Böer G.
AU - Schramm H.
AU - Meyer C.
PY - 2016
SP - 393
EP - 402
DO - 10.5220/0005722803930402