Figure 4: Watermark detection. (Block diagram: the image Im is high-pass filtered and the result is compared, via normalized correlation, with the watermark W to yield a binary 1/0 detection decision.)
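As a rough illustration of the detection chain in Figure 4, the following MATLAB sketch high-pass filters a (possibly watermarked) view and computes its normalized correlation with the watermark W; the Laplacian kernel, the file name and the threshold T are assumptions for illustration, not values from the paper (random requires the Statistics Toolbox and imfilter the Image Processing Toolbox).

% Sketch of the Figure 4 detection chain (illustrative only).
Im = double(imread('rendered_view.png'));  % hypothetical view; assumed
                                           % grayscale and the same size as W
W  = random('norm', 0, 1, 256, 256);       % the watermark used at embedding
                                           % (drawn fresh here for illustration)

% High-pass filtering suppresses the low-frequency image content, so that
% mainly the high-frequency watermark contribution remains.
hp = [0 -1 0; -1 4 -1; 0 -1 0];            % Laplacian kernel (assumed choice)
F  = imfilter(Im, hp, 'symmetric');

% Normalized correlation between the filtered view and the watermark.
nc = sum(F(:) .* W(:)) / sqrt(sum(F(:).^2) * sum(W(:).^2));

T = 0.05;                                  % detection threshold (assumed)
detected = nc > T;                         % binary 1/0 decision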
3 EXPERIMENTAL RESULTS
The Buddha light fields (The Stanford Light Field Archive, 2007) are used in the simulations. We tested various watermarking sequences (shown in Table 1) in order to select the one with the best performance.
Table 1: Watermark test sequences.

No  Distribution        MATLAB code
1   Chi-Square          random('chi2', v, 256, 256)
2   Exponential         random('exp', μ, 256, 256)
3   Geometric           random('geo', p, 256, 256)
4   Poisson             random('poiss', λ, 256, 256)
5   Rayleigh            random('rayl', b, 256, 256)
6   Beta                random('beta', a, b, 256, 256)
7   Binomial            random('bino', n, p, 256, 256)
8   Extreme Value       random('ev', μ, σ, 256, 256)
9   Gamma               random('gam', a, b, 256, 256)
10  Negative Binomial   random('nbin', r, p, 256, 256)
11  Normal (Gaussian)   random('norm', μ, σ, 256, 256)
12  Uniform             random('unif', a, b, 256, 256)
13  Weibull             random('wbl', a, b, 256, 256)
14  Generalized Pareto  random('gp', κ, σ, θ, 256, 256)
15  Hypergeometric      random('hyge', M, K, n, 256, 256)
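For instance, a single 256x256 watermark sequence can be generated with one of the calls of Table 1 and then normalized; the zero-mean, unit-variance normalization shown below is an assumed convenience step for correlation-based detection, not a step stated in the paper.

% Generate a 256x256 Gaussian watermark (row 11 of Table 1).
W = random('norm', 0, 1, 256, 256);        % mu = 0, sigma = 1

% Assumed post-processing: zero mean and unit variance, which is
% convenient for the normalized-correlation detector of Figure 4.
W = (W - mean(W(:))) / std(W(:));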
We also tested the Extreme Value, F, non-central F, Lognormal, Student's t and non-central t distributions; these, however, did not yield acceptable results and are not considered further.
As stated in the introduction, the main problem in the watermarking of free viewpoint video is the successful extraction of the watermark from a randomly generated view. We therefore tested the efficiency of the watermarks on virtual views created by rendering procedures applied to the source data of the Buddha images. The creation of a rendered view from the light field of the Buddha images requires the specification of the following characteristics:
The first is the interpolation method used for the construction of the rendered view. We consider two choices, illustrated by the sketch after this list:
1. Bilinear interpolation.
2. Nearest neighbor interpolation.
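As a minimal sketch of these two options, the following MATLAB code resamples a view at fractional pixel positions with interp2; the file name and the query grid are hypothetical.

% Resampling a view with the two interpolation methods (illustrative only).
I = double(imread('buddha_view.png'));      % hypothetical source view (grayscale)
[h, w]   = size(I);
[X, Y]   = meshgrid(1:w, 1:h);              % original pixel grid
[Xq, Yq] = meshgrid(1:0.5:w, 1:0.5:h);      % example query grid (2x upsampling)

Ibil = interp2(X, Y, I, Xq, Yq, 'linear');  % 1. bilinear interpolation
Inn  = interp2(X, Y, I, Xq, Yq, 'nearest'); % 2. nearest neighbor interpolation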
The second is the viewing position, i.e., the placement of the virtual camera. This placement is determined by two elements, sketched in code after the list:
1. The coordinates of the virtual camera's center.
2. The orientation vector of the virtual camera.
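A minimal MATLAB sketch of such a camera description follows; the struct fields are hypothetical names, and the values shown are those of the Test 1 configuration reported with Figure 6.

% Hypothetical parameterization of the virtual camera.
cam.center      = [0, 0, 1.8];              % 1. coordinates of the camera center
cam.orientation = [0, 0, 1];                % 2. orientation (viewing direction)
cam.orientation = cam.orientation / norm(cam.orientation);  % unit length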
Figure 5: Configurations for the virtual camera position and rotation.
In (Apostolidis, Koz and Triantafyllidis, 2007), tests were carried out for cases 1 and 2 (see Fig. 5), where the virtual camera is located in the camera plane. In this paper, we focus on case 3, where the virtual camera is at an arbitrary position and its rotation is not the identity; a sketch of such a rotation follows below. More specifically, we select four positions of the virtual camera within case 3, so as to cover all its possible viewing positions (zoom in and out, rotation left or right). Figures 6-9 illustrate the four data sets used in our experimental results.
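As a hedged illustration of a camera rotation that is not the identity (the "rotation left or right" cases), the following MATLAB sketch rotates the camera's orientation vector about the vertical axis; the axis choice and the 10-degree angle are assumptions.

% Rotating the virtual camera's orientation about the y (vertical) axis.
theta = 10 * pi / 180;                      % example rotation angle (assumed)
Ry = [ cos(theta) 0 sin(theta);
       0          1 0;
      -sin(theta) 0 cos(theta) ];           % rotation matrix about the y axis
d     = [0; 0; 1];                          % original orientation vector
d_rot = Ry * d;                             % rotated viewing direction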
Figure 6: Test 1: Virtual camera rendered views with zoom in. (a): Nearest neighbor interpolation; (b): bilinear interpolation. Virtual camera settings: camera center coordinates [0, 0, 1.8]; orientation vector [0, 0, 1].