Shape from Silhouette in Space, Time and Light Domains

Maxim Mikhnevich, Denis Laurendeau

Abstract

This paper presents an image segmentation approach for obtaining a set of silhouettes, along with the Visual Hull, of an object observed from multiple viewpoints. The proposed approach can deal with almost any type of object appearance: textured or textureless, shiny or Lambertian surface reflectance, opaque or transparent. In contrast to more classical methods for silhouette extraction from multiple views, which make certain assumptions about the object or scene, neither the background nor the object's appearance properties are modeled. The only assumption is the constancy of the unknown background at a given camera viewpoint while the object is in motion. The principal idea of the method is to estimate the temporal evolution of each pixel, which makes it possible to estimate the background likelihood. Furthermore, the object is captured under different lighting conditions in order to cope with shadows. The information from the space, time and lighting domains is merged in a Markov Random Field (MRF) framework, and the resulting energy function is minimized via graph cuts.
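The energy-minimization step described above can be illustrated with a minimal sketch (this is not the authors' implementation): a binary MRF over a 1-D chain of pixels, with unary background/foreground costs (standing in for the background likelihood) and a Potts smoothness term, solved exactly by max-flow/min-cut. The function name, costs and the Edmonds-Karp solver are illustrative choices; real pipelines use a 2-D pixel grid and an optimized max-flow library.

```python
from collections import deque

def graph_cut_segment(bg_cost, fg_cost, smoothness):
    """Binary MRF segmentation on a 1-D chain of pixels via min-cut.

    bg_cost[i]: unary cost of labelling pixel i as background
    fg_cost[i]: unary cost of labelling pixel i as foreground
    smoothness: Potts penalty for neighbours taking different labels
    Returns a label per pixel: 1 = foreground, 0 = background.
    """
    n = len(bg_cost)
    S, T = n, n + 1                       # source (foreground), sink (background)
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        cap[S][i] = bg_cost[i]            # cut if pixel i ends up on the sink side
        cap[i][T] = fg_cost[i]            # cut if pixel i ends up on the source side
    for i in range(n - 1):                # pairwise Potts terms between neighbours
        cap[i][i + 1] = smoothness
        cap[i + 1][i] = smoothness

    # Edmonds-Karp max-flow: repeatedly augment along shortest residual paths.
    while True:
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in range(n + 2):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if T not in parent:               # no augmenting path left: flow is maximal
            break
        bottleneck, v = float("inf"), T   # find the bottleneck capacity on the path
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T                             # push the bottleneck flow along the path
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u

    # Pixels still reachable from the source in the residual graph are foreground.
    reachable, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in reachable and cap[u][v] > 0:
                reachable.add(v)
                q.append(v)
    return [1 if i in reachable else 0 for i in range(n)]
```

For example, pixels whose unary terms favour the object in the middle of the chain are segmented as a contiguous foreground run, with the smoothness term discouraging isolated label flips.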

References

  1. Baumgart, B. G. (1974). Geometric modeling for computer vision. PhD thesis, Stanford, CA, USA.
  2. Boykov, Y. and Jolly, M.-P. (2001). Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In Eighth IEEE International Conference on Computer Vision (ICCV), volume 1, pages 105-112.
  3. Campbell, N., Vogiatzis, G., Hernández, C., and Cipolla, R. (2007). Automatic 3D object segmentation in multiple views using volumetric graph-cuts. In British Machine Vision Conference, volume 1, pages 530-539.
  4. Jagers, M., Birkbeck, N., and Cobzas, D. (2008). A three-tier hierarchical model for capturing and rendering of 3D geometry and appearance from 2D images. In International Symposium on 3-D Data Processing, Visualization, and Transmission (3DPVT).
  5. Lee, W., Woo, W., and Boyer, E. (2007). Identifying foreground from multiple images. In Eighth Asian Conference on Computer Vision (ACCV), pages 580-589.
  6. Matusik, W., Pfister, H., Ngan, A., Beardsley, P., Ziegler, R., and McMillan, L. (2002). Image-based 3D photography using opacity hulls. ACM Transactions on Graphics, 21(3):427-437.
  7. Parks, D. H. and Fels, S. S. (2008). Evaluation of background subtraction algorithms with post-processing. In International Conference on Advanced Video and Signal Based Surveillance, pages 192-199.
  8. Piccardi, M. (2004). Background subtraction techniques: a review. In International Conference on Systems, Man & Cybernetics (SMC), pages 3099-3104.
  9. Radke, R. J., Andra, S., Al-Kofahi, O., and Roysam, B. (2005). Image change detection algorithms: A systematic survey. IEEE Transactions on Image Processing, 14(3):294-307.
  10. Rother, C., Kolmogorov, V., and Blake, A. (2004). "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 23(3):309-314.
  11. Smith, A. R. and Blinn, J. F. (1996). Blue screen matting. In ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 259-268.
  12. Snow, D., Viola, P., and Zabih, R. (2000). Exact voxel occupancy with graph cuts. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 1345.
  13. Sun, J., Kang, S. B., Xu, Z., Tang, X., and Shum, H.-Y. (2007). Flash cut: Foreground extraction with flash and no-flash image pairs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  14. Wu, C., Liu, Y., Ji, X., and Dai, Q. (2009). Multi-view reconstruction under varying illumination conditions. In Proceedings of the IEEE International Conference on Multimedia and Expo, pages 930-933.
  15. Zongker, D. E., Werner, D. M., Curless, B., and Salesin, D. H. (1999). Environment matting and compositing. In ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 205-214.

Paper Citation


in Harvard Style

Mikhnevich, M. and Laurendeau, D. (2014). Shape from Silhouette in Space, Time and Light Domains. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP (VISIGRAPP 2014), ISBN 978-989-758-009-3, pages 368-377. DOI: 10.5220/0004722403680377


in Bibtex Style

@conference{visapp14,
author={Maxim Mikhnevich and Denis Laurendeau},
title={Shape from Silhouette in Space, Time and Light Domains},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)},
year={2014},
pages={368-377},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004722403680377},
isbn={978-989-758-009-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2014)
TI - Shape from Silhouette in Space, Time and Light Domains
SN - 978-989-758-009-3
AU - Mikhnevich M.
AU - Laurendeau D.
PY - 2014
SP - 368
EP - 377
DO - 10.5220/0004722403680377