Figure 6: Multithread vs GPU versions in real pipeline.
As Fig. 6 shows, the times of the three versions during normal pipeline operation differ from the averages measured in isolation. There is also a shift in the times, which is sharper in the GPU versions, because the GPU is already saturated by the MC and normal computations, the rendering of the scene, and the calculation of the occlusion textures. The CPU, in contrast, carries the same workload in the full pipeline as in the isolated test, so there is no significant time shift in the multithreaded version. The most effective algorithm in the current system is therefore the CPU-based one.
The rendering system was tuned to adapt to the load of the videoconferencing system and the available resources. In this way, an acceptable frame rate for a videoconference (always above 15 frames per second) was guaranteed. Exchangeable CPU and GPU implementations were developed for that purpose.
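The switching strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the backend functions, class name, and timing logic are assumptions, standing in for the paper's exchangeable CPU and GPU marching-cubes paths.

```python
import time

# Hypothetical stand-ins for the CPU and GPU marching-cubes backends;
# a real system would return a triangle mesh for the given volume.
def mc_cpu(volume):
    return f"cpu-mesh({volume})"

def mc_gpu(volume):
    return f"gpu-mesh({volume})"

class AdaptiveRenderer:
    """Switches between exchangeable MC implementations so the pipeline
    stays above a minimum frame rate (15 fps, as in the text)."""
    MIN_FPS = 15.0

    def __init__(self):
        self.backends = {"cpu": mc_cpu, "gpu": mc_gpu}
        self.current = "gpu"

    def render_frame(self, volume):
        start = time.perf_counter()
        mesh = self.backends[self.current](volume)
        elapsed = time.perf_counter() - start
        # If the active backend cannot sustain the target rate (e.g. the
        # GPU is saturated by scene rendering and occlusion textures),
        # fall back to the other implementation for subsequent frames.
        if elapsed > 1.0 / self.MIN_FPS:
            self.current = "cpu" if self.current == "gpu" else "gpu"
        return mesh
```

A production version would of course smooth the measurement over several frames before switching, to avoid oscillating between backends on a single slow frame.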
4.4 Hardware Setup
The desktop on which all these timings were measured had the following characteristics: an NVIDIA GeForce 280 GTX graphics card; an Intel Core 2 Quad Q9550 CPU clocked at 2.66 GHz; and 4 GB of installed RAM, of which only 3.25 GB were usable because of the operating system used.
5 CONCLUSIONS
This comparative study of MC implementations shows how the rendering process influences the videoconferencing system. Moreover, since the algorithms implemented on the CPU were also implemented on the GPU, the final results show how the algorithms differ in efficiency. The performance of each option depends on the system load, and specifically on whether the CPU or the GPU is more saturated. To avoid a significant decline in the real-time frame rate, and in the user's sense of immersion in the virtual environment, it is important to remember that the GPU is also used to render the virtual environment, and that its resource consumption depends on the level of detail required, as we have shown in this work.
ACKNOWLEDGEMENTS
This work has been partially supported by the Spanish Administration agency CDTI under the VISION project of the CENIT programme and by the Ministerio de Educación y Ciencia of Spain. The authors are very grateful to all the partners of this project, especially to Telefonica I+D for coordinating the project.
GRAPP 2011 - International Conference on Computer Graphics Theory and Applications