Figure 7: Trajectory of a person who enters the room at the bottom, walks to the table (yellow), sits there on a chair (green), stands up from this chair, and walks around his workplace. The blue dots represent our proposed algorithm, which produces output for the complete trajectory, even under severe occlusion by the table, the TV screen (blue), and the wall (gray). The traditional visual hull algorithm (red dots) only outputs positions for 36.9% of the frames. Visual inspection shows that the blue dots are much closer to the actual person than the red dots. The camera setup is the same as in figure 3.
can be treated as a form of occlusion.
The algorithm can handle both static and dynamic occlusion because it operates on a frame-by-frame basis, without temporal information about the occluders. In future work, this information could be integrated.
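The frame-by-frame principle can be illustrated with a toy sketch. The example below is illustrative only (hand-made 1-D masks and a hypothetical `carve` helper, not the paper's voxel-based implementation): a cell stays foreground only if every camera that actually observes it, i.e. is not occluded there, sees foreground; occluded views abstain and therefore cannot carve away the true object. Because each frame is processed independently, static and dynamic occluders are handled identically.

```python
# Toy frame-by-frame sketch of occlusion-aware silhouette carving on a 1-D
# cell grid (hand-made masks, NOT the authors' implementation).

def carve(silhouettes, occlusions):
    """Per-camera dicts {cell: bool}; returns the surviving foreground cells."""
    hull = set()
    for cell in silhouettes[0]:
        # Cameras whose view of this cell is occluded cast no vote.
        votes = [sil[cell] for sil, occ in zip(silhouettes, occlusions)
                 if not occ[cell]]
        if votes and all(votes):
            hull.add(cell)
    return hull

# Person at cell 1; camera B's view of cell 1 is blocked by an occluder.
sil_a, occ_a = {0: False, 1: True, 2: False}, {0: False, 1: False, 2: False}
sil_b, occ_b = {0: False, 1: False, 2: False}, {0: False, 1: True, 2: False}
print(carve([sil_a, sil_b], [occ_a, occ_b]))  # {1}: the person survives
```

Naive carving, which treats camera B's empty silhouette as background, would erase cell 1 entirely; treating the occluded pixels as "unknown" is what preserves the trajectory through the occluded region.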
REFERENCES

Allied Vision Technologies. Manta G-046C. http://www.alliedvisiontec.com/us/products/cameras/gigabit-ethernet/manta/g-046bc.html. Accessed: 2014-09-14.

Guan, L., Sinha, S., Franco, J.-S., and Pollefeys, M. (2006). Visual hull construction in the presence of partial occlusion. In 3D Data Processing, Visualization, and Transmission, Third International Symposium on, pages 413–420. IEEE.

Laurentini, A. (1994). The visual hull concept for silhouette-based image understanding. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 16(2):150–162.

Laurentini, A. (1997). How many 2D silhouettes does it take to reconstruct a 3D object? Computer Vision and Image Understanding, 67(1):81–87.

Laurentini, A. (1999). The visual hull of curved objects. In Proceedings of ICCV99, Corfu, pages 356–361.

Ober-Gecks, A., Haenel, M., Werner, T., and Henrich, D. (2014). Fast multi-camera reconstruction and surveillance with human tracking and optimized camera configurations. In ISR/Robotik 2014; 41st International Symposium on Robotics; Proceedings of, pages 1–8. VDE.

Slembrouck, M., Van Cauwelaert, D., Van Hamme, D., Van Haerenborgh, D., Van Hese, P., Veelaert, P., and Philips, W. (2014). Self-learning voxel-based multi-camera occlusion maps for 3D reconstruction. In 9th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP-2014). SCITEPRESS.

St-Charles, P.-L., Bilodeau, G.-A., and Bergevin, R. (2014). Flexible background subtraction with self-balanced local sensitivity. In Proceedings of IEEE Workshop on Change Detection.

Stengel, D., Wiedemann, T., and Vogel-Heuser, B. (2012). Efficient 3D voxel reconstruction of human shape within robotic work cells. In Mechatronics and Automation (ICMA), 2012 International Conference on, pages 1386–1392. IEEE.

Toth, C., O'Rourke, J., and Goodman, J. (2004). Handbook of Discrete and Computational Geometry, Second Edition. Discrete and Combinatorial Mathematics Series. Taylor & Francis.

Wang, Y., Jodoin, P.-M., Porikli, F., Konrad, J., Benezeth, Y., and Ishwar, P. (2014). CDnet 2014: An expanded change detection benchmark dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 387–394.
VISAPP 2015 - International Conference on Computer Vision Theory and Applications