Figure 10: Verification through interpolation and comparison. (a) Original Kinect point cloud. (b) Interpolated point cloud. (c) HD point cloud generated by our method. The area in the red rectangle marks the part selected for quantitative estimation of the depth variations.
4 CONCLUSIONS
We have presented a methodology that combines HD-resolution images with the low-resolution Kinect to produce a high-resolution dense point cloud using graph cuts. First, the Kinect and HD cameras are registered so that the Kinect point cloud can be transferred into the HD camera frame, yielding a high-resolution point-cloud space. Then,
we discretize the point cloud into a voxel space and solve a graph-cut formulation that accounts for neighborhood smoothness. This methodology produces a good high-resolution point cloud with the help of the low-resolution Kinect data, and could be useful for building high-resolution models using the Kinect.
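As a rough illustration of the pipeline summarized above, the sketch below transfers Kinect points into the HD camera frame and evaluates the kind of data-plus-smoothness energy that a graph cut minimizes over the voxel grid. This is a minimal sketch in Python/NumPy under our own assumptions: all function and variable names are illustrative, not the paper's implementation, and the max-flow solver itself (e.g. the Boykov–Kolmogorov algorithm cited above) is omitted.

```python
import numpy as np

def transfer_to_hd(points_kinect, R, t, K_hd):
    """Map Kinect 3-D points into the HD camera frame and project them
    with the HD intrinsics (hypothetical registration step)."""
    pts_hd = points_kinect @ R.T + t      # rigid transform Kinect -> HD frame
    proj = pts_hd @ K_hd.T                # perspective projection
    uv = proj[:, :2] / proj[:, 2:3]       # HD pixel coordinates
    return pts_hd, uv

def labeling_energy(labels, data_cost, lam=1.0):
    """Energy of a binary voxel-occupancy labeling: per-voxel data term
    plus a Potts smoothness term over 6-connected neighbors. A graph cut
    finds the labeling that minimizes this energy exactly."""
    # Data term: cost of each voxel taking its assigned label (0 or 1).
    data = np.take_along_axis(data_cost, labels[..., None], axis=-1).sum()
    # Smoothness term: count disagreeing neighbor pairs along each axis.
    smooth = sum(np.count_nonzero(np.diff(labels, axis=ax)) for ax in range(3))
    return data + lam * smooth
```

With an identity registration, a point on the optical axis projects to the HD principal point, and flipping a single voxel in an otherwise empty grid pays its data cost plus one smoothness penalty per disagreeing neighbor.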
ACKNOWLEDGEMENTS
The authors gratefully acknowledge Dr. Subodh Kumar, Neeraj Kulkarni, Kinshuk Sarabhai and Shruti Agarwal for their constant help in providing several tools for Kinect data acquisition, modules and error notification.
The authors also acknowledge the Department of Science and Technology, India, for sponsoring the project "Acquisition, representation, processing and display of digital heritage sites" (number "RP02362") under the India Digital Heritage programme, which helped us acquire the images at Hampi in Karnataka, India.
VISAPP 2012 - International Conference on Computer Vision Theory and Applications