Figure 9: Results of the quadtree virtual-plane-based reconstruction, obtained by assembling the quadtree-decomposed registration layers (here, 47 layers): (a) maximum resolution (decomposition block size of 1); (b) and (c) decomposition block sizes of 8 and 16, respectively.
As seen in this example, three quarters and two octants are completely empty, and only two octants are partially occupied.
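To make the decomposition step concrete, the following Python sketch (an illustrative assumption, not the exact procedure of Algorithm 1; the names quadtree and min_block are hypothetical) recursively splits a binary occupancy map of one registration plane: completely empty and completely occupied blocks are kept as single leaves, while partially occupied blocks are subdivided until the chosen decomposition block size is reached.

import numpy as np

def quadtree(occ, x, y, size, min_block, nodes):
    # Decompose the square region (x, y, size) of the binary occupancy map `occ`.
    block = occ[y:y + size, x:x + size]
    if not block.any():                      # completely empty block: single leaf
        nodes.append((x, y, size, 'empty'))
    elif block.all():                        # completely occupied block: single leaf
        nodes.append((x, y, size, 'full'))
    elif size <= min_block:                  # reached the decomposition block size
        nodes.append((x, y, size, 'partial'))
    else:                                    # partially occupied: split into quadrants
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree(occ, x + dx, y + dy, half, min_block, nodes)

# Toy registration plane: a 16x16 map with one occupied corner region.
occ = np.zeros((16, 16), dtype=bool)
occ[0:6, 0:6] = True
nodes = []
quadtree(occ, 0, 0, 16, min_block=1, nodes=nodes)
print(len(nodes), 'leaves')

Larger values of min_block in this sketch correspond to the coarser decomposition block sizes (8 and 16) discussed below.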
After repeating these operations for all 47 virtual registration planes and assembling them together, the result becomes the 3D reconstruction of the object. Fig. 9 demonstrates the result of the 3D reconstruction. Fig. 9-a shows the result when the maximum resolution, in other words blocks of size equal to one, has been used for each registration layer. Fig. 9-b and Fig. 9-c show the results when decomposition blocks of size 8 and 16 have been used, respectively. Depending on the application and the volume size of the scene, the resolution of the decomposition blocks and, moreover, the vertical resolution (the distance between registration planes, denoted as Δh in Algorithm 1) can be adjusted.
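For illustration only, a possible assembly of the decomposed layers into a volume is sketched below (the names assemble_layers and dh are assumptions introduced here; the actual procedure is the one given in Algorithm 1): every non-empty quadtree leaf of the k-th registration layer is turned into an axis-aligned box whose vertical extent equals the layer spacing Δh.

def assemble_layers(layer_leaves, dh):
    # layer_leaves: one list of quadtree leaves (x, y, size, label) per registration plane.
    # Returns axis-aligned boxes (x, y, z, size, dh) for the occupied leaves.
    boxes = []
    for k, leaves in enumerate(layer_leaves):
        z = k * dh                      # height of the k-th virtual registration plane
        for (x, y, size, label) in leaves:
            if label != 'empty':        # keep only (partially) occupied space
                boxes.append((x, y, z, size, dh))
    return boxes

# Minimal usage: two identical layers, each with one full and one empty leaf.
leaves = [(0, 0, 8, 'full'), (8, 0, 8, 'empty')]
print(assemble_layers([leaves, leaves], dh=1.0))

Decreasing dh or the decomposition block size refines the reconstruction at the cost of more layers, or more leaves per layer.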
6 CONCLUSIONS
A multi-resolution 3D reconstruction method using inertial-visual data fusion has been proposed in this paper. The proposed approach is based on obtaining the homography matrices between a set of virtual planes and the image plane (an illustrative sketch is given below). An algorithm has been introduced to perform the proposed 3D reconstruction method and to produce a set of quadtree data structures. Depending on the application and the volume size of the scene, the resolution of the decomposition blocks can be adjusted. Moreover, for the same reason, the vertical distance among the virtual registration layers can be increased or decreased in order to reach the resolution of interest. Finally, the experimental results demonstrate the efficacy of the proposed method for 3D volumetric reconstruction.
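As a minimal, assumption-laden sketch of the plane-induced homographies mentioned above (not the paper's exact derivation): given the camera intrinsics K and a camera pose (R, t) expressed in a gravity-aligned world frame (in the paper, the orientation is aided by the IMU), a horizontal virtual plane Z = h induces the 3x3 homography H_h = K [r1 r2 (h r3 + t)], where r1, r2, r3 are the columns of R; one such matrix is obtained per registration plane.

import numpy as np

def virtual_plane_homography(K, R, t, h):
    # Homography mapping points (X, Y) on the virtual plane Z = h to image pixels,
    # assuming a pinhole camera x ~ K [R | t] X (standard plane-induced homography).
    r1, r2, r3 = R[:, 0], R[:, 1], R[:, 2]
    return K @ np.column_stack((r1, r2, h * r3 + t))

# Toy usage: identity orientation, world origin 5 units in front of the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
print(virtual_plane_homography(K, R, t, h=1.0))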