MULTI-LAYERED CONTENTS GENERATION FROM REAL
WORLD SCENE BY THREE-DIMENSIONAL MEASUREMENT
M. K. Kim¹, Y. Nakajima¹,², T. Takeshita¹, S. Onogi², M. Mitsuishi¹ and Y. Matsumoto¹,²
¹School of Engineering, the University of Tokyo, Tokyo, Japan
²Intelligent Modeling Laboratory, the University of Tokyo, Tokyo, Japan
Keywords: Layer content, Three-dimensional measurement, Depth from Focus, Spatio-temporal image analysis.
Abstract: In this paper, we propose a method to automatically create multi-layered contents from a real-world scene based on Depth from Focus and Spatio-Temporal Image Analysis. Since the contents are generated as a layer representation directly from the real world, the point of view can be changed freely, which reduces the labor and cost of creating three-dimensional (3-D) contents with Computer Graphics. To extract layers from the real images, Depth from Focus is used for stationary objects and Spatio-Temporal Image Analysis for moving objects. We selected these two methods for the stability of the system: Depth from Focus does not need to search for corresponding points, and Spatio-Temporal Image Analysis also has a relatively simple computing algorithm. We performed an experiment to extract layer contents from stationary and moving objects automatically, and the feasibility of the method was confirmed.
1 INTRODUCTION
Three-dimensional (3-D) contents are required in various fields of Virtual Reality, ranging from cultural assets to cityscapes. In the case of a cultural asset, digitization requires a quite complex and precise description. The cultural asset is digitized using 3-D range measurement of the object and Computer Graphics to obtain delicate contents. In this case, content generation also requires measurement apparatus such as a helicopter or the Global Positioning System. Hence, it demands huge labor and cost.
On the other hand, 3-D content digitization of a cityscape (e.g. buildings, cars) does not require the delicateness of a cultural asset. For instance, when you go sightseeing in a virtual space, not all of the contents used need to be as elaborate as cultural asset contents.
In the latter case, if real images are used to create 3-D contents directly, it is possible to generate more realistic 3-D contents simply, with less effort and cost.
Google Street View is one example of the latter case. However, it has a limitation: when we look at a street through Street View, we often cannot move to the exact place we want to see, because the photograph data are expressed discretely. The viewpoint is therefore limited.
To solve this problem, we employ a solution for 3-D scene description. The concept is 3-D representation with 2-D real-image layers. In this method, 3-D space and its contents are represented by an arrangement of 2-D layers combined with psychological factors. This 3-D representation method was proposed, and its effects explained, by Ogi (Ogi07). It has the following two advantages. First, the data volume is small, because one layer can express multiple points of view. Second, we can obtain continuous points of view that were not available before.
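To make the layer idea concrete, the following is a minimal sketch, assuming a simple pinhole parallax model: each 2-D layer is shifted in proportion to the camera translation divided by its depth and composited back to front. The Layer class, the render_view function, and the focal-length value are our own illustrative assumptions, not the system's implementation.

    import numpy as np

    class Layer:
        """One 2-D image layer placed at a fixed depth (illustrative)."""
        def __init__(self, rgba, depth):
            self.rgba = rgba    # H x W x 4 uint8 image; alpha marks the segmented region
            self.depth = depth  # distance of the layer from the original camera

    def render_view(layers, dx, focal=800.0):
        """Composite layers for a camera translated sideways by dx.

        Under a pinhole model a layer at depth Z shifts by focal * dx / Z
        pixels, so near layers move more than far ones; this per-layer
        parallax is what lets a flat layer set express multiple viewpoints.
        """
        canvas = None
        for layer in sorted(layers, key=lambda l: l.depth, reverse=True):  # far to near
            s = int(round(focal * dx / layer.depth))
            shifted = np.roll(layer.rgba, s, axis=1)  # wrap-around ignored for brevity
            if canvas is None:
                canvas = shifted.copy()
            else:
                a = shifted[..., 3:4] / 255.0  # alpha of the nearer layer
                canvas[..., :3] = (a * shifted[..., :3]
                                   + (1 - a) * canvas[..., :3]).astype(canvas.dtype)
        return canvas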
Our team, together with Ogi, has been developing a Dome Display Layer Representation System based on Figure 1. In this system, 2-D contents are presented on a dome display while a psychological factor (Seno08) is considered at the same time, so a high sense of presence is obtained. In addition, the data size is small because of the layer representation. The system is performed in three steps. First, 2-D layer contents are made by segmenting the real-world image. Next, the layers are integrated according to the relation of the point of view and the 3-D positions of the layers. Finally, these 2-D layer contents are presented on the dome display in consideration of its distortion.
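As a rough sketch of this three-step pipeline (all helper names are hypothetical; the paper does not give an implementation):

    def build_dome_contents(frames, viewpoint):
        # Step 1: segment 2-D layer contents from the real-world images;
        # the paper uses Depth from Focus for stationary objects and
        # Spatio-Temporal Image Analysis for moving objects.
        layers = segment_into_layers(frames)            # hypothetical helper

        # Step 2: integrate the layers according to the point of view
        # and each layer's 3-D position (depth ordering and parallax).
        composed = integrate_layers(layers, viewpoint)  # hypothetical helper

        # Step 3: present on the dome display, compensating its distortion.
        return warp_for_dome(composed)                  # hypothetical helper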