3D Object Recognition based on the Reference Point Ensemble

Toshiaki Ejima, Shuichi Enokida, Hisashi Ideguchi, Tomoyuki Horiuchi, Toshiyuki Kouno


In this paper, we propose a high-performance 3D recognition method based on the reference point ensemble, a natural extension of the generalized Hough transform. A reference point ensemble consists of several reference points, each color-coded green or red: red reference points are used to verify hypotheses, and green reference points are used for Hough voting. The configuration of the reference points within an ensemble is designed according to the model shape. In the proposed method, a set of reference point ensembles is generated from the local features of a given 3D scene, and each generated ensemble represents a hypothetical 3D pose of the target object in the scene. Hypotheses that pass verification by the red reference points proceed to Hough voting. Voting is performed independently in each green point space, which reduces the voting space to three dimensions: although 3D recognition generally requires a six-dimensional voting space, the proposed method decomposes it into a few three-dimensional spaces. Experiments demonstrate that this decomposition, combined with the verification using the green and red reference points, is effective for 3D recognition. In other words, effective recognition is achieved by skillfully switching between two modes: (A) individual mode, in which each hypothesis is voted independently in each green Hough space and verified against the red reference points; and (B) ensemble mode, in which registration into the PHL (promising hypothesis list) is verified and the total votes are aggregated. This mode-switching mechanism is the most significant characteristic of the proposed method.
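The individual-mode stage described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the cell size, the dictionary-based accumulators, and the interpretation of red-point verification as a scene-consistency check (`verify_red`) are all assumptions made for the sketch. The key idea it shows is that each green reference point gets its own three-dimensional accumulator, instead of one six-dimensional pose accumulator.

```python
# Sketch of decomposed Hough voting with red-point verification.
# CELL, verify_red, and the hypothesis layout are illustrative assumptions.
from collections import defaultdict

CELL = 0.05  # voting-cell edge length in scene units (assumed)

def quantize(p):
    """Map a 3D point to a discrete voting cell."""
    return tuple(int(round(c / CELL)) for c in p)

def verify_red(red_points, scene_occupied):
    """Hypothetical check: a hypothesis survives only if none of its
    red reference points land in a cell already occupied by the scene."""
    return all(quantize(p) not in scene_occupied for p in red_points)

def vote(hypotheses, scene_occupied, n_green):
    """Each surviving hypothesis votes independently in one 3-D
    accumulator per green reference point, rather than in a single
    6-D pose space."""
    accumulators = [defaultdict(int) for _ in range(n_green)]
    for hyp in hypotheses:
        if not verify_red(hyp["red"], scene_occupied):
            continue  # rejected in individual mode
        for k, g in enumerate(hyp["green"]):
            accumulators[k][quantize(g)] += 1
    return accumulators

# Toy usage: two pose hypotheses; h2's red point collides with the scene.
scene = {quantize((0.9, 0.0, 0.0))}
h1 = {"green": [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)], "red": [(0.5, 0.5, 0.5)]}
h2 = {"green": [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)], "red": [(0.9, 0.0, 0.0)]}
accs = vote([h1, h2], scene, n_green=2)
```

Aggregating the per-accumulator peaks into a total score would then correspond to the ensemble mode.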



Paper Citation

in Harvard Style

Ejima T., Enokida S., Horiuchi T., Ideguchi H. and Kouno T. (2014). 3D Object Recognition based on the Reference Point Ensemble. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-009-3, pages 261-269. DOI: 10.5220/0004651802610269

in Bibtex Style

@conference{visapp14,
author={Toshiaki Ejima and Shuichi Enokida and Tomoyuki Horiuchi and Hisashi Ideguchi and Toshiyuki Kouno},
title={3D Object Recognition based on the Reference Point Ensemble},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={261-269},
doi={10.5220/0004651802610269},
isbn={978-989-758-009-3},
}

in EndNote Style

JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)
TI - 3D Object Recognition based on the Reference Point Ensemble
SN - 978-989-758-009-3
AU - Ejima T.
AU - Enokida S.
AU - Horiuchi T.
AU - Ideguchi H.
AU - Kouno T.
PY - 2014
SP - 261
EP - 269
DO - 10.5220/0004651802610269