Combining Dense Features with Interest Regions for Efficient Part-based Image Matching

Priyadarshi Bhattacharya, Marina L. Gavrilova

2014

Abstract

One of the most popular approaches for object recognition is bag-of-words, which represents an image as a histogram of the frequencies of visual-word occurrences. However, the approach has notable drawbacks. Besides requiring computationally expensive geometric verification to compensate for the lack of spatial information in the representation, it is particularly unsuitable for sub-image retrieval because noise, background clutter, or other objects in the vicinity distort the histogram representation. In our previous work, we addressed this issue by developing a novel part-based image matching framework that utilizes the spatial layout of dense features within interest regions to vastly improve recognition rates for landmarks. In this paper, we improve upon the previously published recognition results by more than 12% and achieve significant reductions in computation time. A region of interest (ROI) selection strategy is proposed along with a new voting mechanism for ROIs. Also, inverse document frequency weighting is introduced in our image matching framework for both ROIs and the dense features inside the ROIs. We provide experimental results for various vocabulary sizes on the benchmark Oxford 5K and INRIA Holidays datasets.
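To make the two ideas in the abstract concrete, here is a minimal sketch of a bag-of-words histogram with inverse document frequency (idf) weighting, as commonly used in visual-word retrieval. This is an illustrative reconstruction, not the authors' implementation; the function names and the cosine-similarity scoring are our own assumptions, and real systems quantize local descriptors (e.g. SIFT) against a learned vocabulary before this step.

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Occurrence-count histogram of quantized visual words for one image.

    word_ids: integer array of visual-word indices assigned to the
    image's local descriptors (quantization step not shown here).
    """
    return np.bincount(word_ids, minlength=vocab_size).astype(float)

def idf_weights(histograms):
    """Inverse document frequency over a collection of image histograms.

    Words that occur in many images carry little discriminative
    information and are down-weighted: idf_w = log(N / df_w).
    """
    n_images = len(histograms)
    doc_freq = np.sum([h > 0 for h in histograms], axis=0)
    return np.log(n_images / np.maximum(doc_freq, 1))

def weighted_similarity(h1, h2, idf):
    """Cosine similarity between idf-weighted histograms."""
    w1, w2 = h1 * idf, h2 * idf
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    if n1 == 0.0 or n2 == 0.0:
        return 0.0
    return float(np.dot(w1, w2) / (n1 * n2))
```

The paper's part-based variant differs from this global scheme in that histograms (and idf weights) are computed per ROI rather than over the whole image, which limits the influence of background clutter outside the region.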

References

  1. Arandjelovic, R. and Zisserman, A. (2012). Three things everyone should know to improve object retrieval. In IEEE Conference on Computer Vision and Pattern Recognition.
  2. Bhattacharya, P. and Gavrilova, M. L. (2013). Spatial consistency of dense features within interest regions for efficient landmark recognition. The Visual Computer, 29(6-8):491-499.
  3. Cao, Y., Wang, C., Li, Z., Zhang, L., and Zhang, L. (2010). Spatial-bag-of-features. In CVPR, pages 3352-3359.
  4. Chatfield, K., Lempitsky, V., Vedaldi, A., and Zisserman, A. (2011). The devil is in the details: an evaluation of recent feature encoding methods. In British Machine Vision Conference.
  5. Chum, O., Philbin, J., Sivic, J., Isard, M., and Zisserman, A. (2007). Total recall: Automatic query expansion with a generative feature model for object retrieval. In ICCV, pages 1-8.
  6. Jegou, H., Douze, M., and Schmid, C. (2008). Hamming embedding and weak geometric consistency for large scale image search. In ECCV, pages 304-317.
  7. Jegou, H., Douze, M., and Schmid, C. (2010). Improving bag-of-features for large scale image search. International Journal of Computer Vision, 87(3):316-336.
  8. Lin, Z. and Brandt, J. (2010). A local bag-of-features model for large-scale object retrieval. In European Conference on Computer Vision.
  9. Mikolajczyk, K. and Schmid, C. (2004). Scale & affine invariant interest point detectors. International Journal of Computer Vision, 60(1):63-86.
  10. Mikulik, A., Perdoch, M., Chum, O., and Matas, J. (2010). Learning a fine vocabulary. In European Conference on Computer Vision, volume 6313 of Lecture Notes in Computer Science, pages 1-14. Springer.
  11. Perdoch, M., Chum, O., and Matas, J. (2009). Efficient representation of local geometry for large scale object retrieval. In CVPR, pages 9-16.
  12. Philbin, J., Chum, O., Isard, M., Sivic, J., and Zisserman, A. (2007). Object retrieval with large vocabularies and fast spatial matching. In CVPR.
  13. Philbin, J., Chum, O., Isard, M., Sivic, J., and Zisserman, A. (2008). Lost in quantization: Improving particular object retrieval in large scale image databases. In CVPR.
  14. Vedaldi, A. and Fulkerson, B. (2012). VLFeat: An open and portable library of computer vision algorithms. Available at http://www.vlfeat.org/.
  15. Wu, Z., Ke, Q., Isard, M., and Sun, J. (2009). Bundling features for large scale partial-duplicate web image search. In CVPR, pages 25-32.
  16. Zhao, W. (2010). LIP-VIREO: Local interest point extraction toolkit. Available at http://www.cs.cityu.edu.hk/~wzhao2/.


Paper Citation


in Harvard Style

Bhattacharya P. and Gavrilova M. (2014). Combining Dense Features with Interest Regions for Efficient Part-based Image Matching. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014) ISBN 978-989-758-004-8, pages 68-75. DOI: 10.5220/0004684000680075


in Bibtex Style

@conference{visapp14,
author={Priyadarshi Bhattacharya and Marina L. Gavrilova},
title={Combining Dense Features with Interest Regions for Efficient Part-based Image Matching},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={68-75},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004684000680075},
isbn={978-989-758-004-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: VISAPP, (VISIGRAPP 2014)
TI - Combining Dense Features with Interest Regions for Efficient Part-based Image Matching
SN - 978-989-758-004-8
AU - Bhattacharya P.
AU - Gavrilova M.
PY - 2014
SP - 68
EP - 75
DO - 10.5220/0004684000680075