we cannot directly compute σ_i, we approximate it with an indirect measurement: for example, by considering the amplitude of the area near θ_i in the energy function minimized in (5), or by taking σ_i proportional to the noise standard deviation σ_η in (1). The noise standard deviation is estimated as in (Donoho and Johnstone, 1994). Given a datum (x_i, θ_i), we assign a full vote to all the exact solutions and spread smaller votes to the neighboring parameters, according to the error in θ_i.
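The noise estimate cited above is commonly computed as the median absolute deviation of the finest-scale wavelet detail coefficients divided by 0.6745. The following Python sketch only illustrates that estimator and is not the authors' code; the use of a single-level Haar diagonal detail is an assumption.

```python
import numpy as np

def estimate_noise_std(z):
    """Robust noise estimate in the spirit of (Donoho and Johnstone, 1994):
    sigma_eta = MAD(finest-scale detail coefficients) / 0.6745.
    A single-level Haar diagonal detail is used here (our assumption)."""
    z = np.asarray(z, dtype=float)
    z = z[: z.shape[0] // 2 * 2, : z.shape[1] // 2 * 2]   # even-sized crop
    # Orthonormal 2D Haar diagonal detail coefficients (finest scale).
    d = (z[0::2, 0::2] - z[0::2, 1::2] - z[1::2, 0::2] + z[1::2, 1::2]) / 2.0
    return np.median(np.abs(d)) / 0.6745
```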
Let now p = (p_1, p_2) represent a coordinate system in the parameter space, and assume θ_i = 0 and x_i = p_i = (0, 0). We model the vote spread by assuming that along the line p_1 = 1 the errors are distributed as σ_i √(2π) · N(0, σ_i), i.e. as a Gaussian rescaled so that its peak corresponds to a full vote. Along any line p_1 = k, the votes are still Gaussian distributed, with a full vote at the exact solution (k, 0); for neighboring parameters the votes depend only on the angular distance from θ_i, see Figure 4. Therefore, the following weight function is used for distributing the votes in the parameter space (when x_i = p_i = (0, 0) and θ_i = 0):
v_i(p_1, p_2) = \exp\left( -\frac{p_2^2}{(1 + p_1^2)\,\sigma_i^2} \right).    (7)
The vote weight function v_i associated with other data (x_i, θ_i) corresponds to Equation (7), suitably rotated and translated. When all pairs (x_i, θ_i), i = 1, ..., N, have been considered, the parameter that received the highest vote is taken as the solution, i.e.
\hat{p} = \arg\max_{p \in P} V(p), \qquad \text{being} \quad V(p) = \sum_{i=1}^{N} v_i(p).    (8)
The coordinates of C = π ∩ a are determined from p̂.
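As an illustration of Eqs. (7)-(8), the sketch below accumulates the votes of all data on a discrete grid of candidate parameters and returns the peak. It is not the authors' implementation: the grid resolution and the convention that the local p_1-axis is the line of exact solutions through x_i (perpendicular to the tangent direction θ_i) are assumptions.

```python
import numpy as np

def vote_map(points, thetas, sigmas, grid_x, grid_y):
    """Sketch of the voting of Eqs. (7)-(8): every datum (x_i, theta_i)
    spreads Gaussian votes over a grid of candidate parameters p.
    Assumption: in the local frame of datum i, the p_1-axis is the line of
    exact solutions through x_i (perpendicular to the tangent direction)."""
    P1, P2 = np.meshgrid(grid_x, grid_y)          # candidate parameters p
    V = np.zeros_like(P1)
    for (x, y), theta, sigma in zip(points, thetas, sigmas):
        # Express the candidates in the local frame of datum i.
        dx, dy = P1 - x, P2 - y
        n = theta + np.pi / 2.0                   # direction of exact solutions
        p1 = dx * np.cos(n) + dy * np.sin(n)
        p2 = -dx * np.sin(n) + dy * np.cos(n)
        V += np.exp(-p2**2 / ((1.0 + p1**2) * sigma**2))   # Eq. (7)
    k = np.unravel_index(np.argmax(V), V.shape)   # Eq. (8): p_hat = argmax V
    p_hat = (P1[k], P2[k])
    return V, p_hat
```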
3.3 Conic Section Blurring Paths
Assuming circular blurring paths reduces the computational load, but gives inaccurate solutions whenever a is not perpendicular to π. We present an algorithm for estimating a and ω when V ∈ a and a is in a general position w.r.t. π. In particular, if we call π_C a plane perpendicular to a, then π_C is obtained from π by two rotations of angles α and β. We do not consider the case V ∉ a, as in this case the blur would depend on the scene depth.
The votes in the parameter space show at a glance what happens when circular blurring paths are assumed but a is not orthogonal to π. Figure 5.a shows a blurred image produced when the plane orthogonal to a forms angles α* = 45° and β* = 0° with π. If we treat the blurring paths as circumferences, the votes in the parameter space do not point out a clear solution, as shown in Figures 5.b and 5.c.
The directions θ_i obtained from (5) represent the tangent directions of the blurring paths, even when the blurring paths are conic sections. But the blurring paths themselves are not circumferences; thus the lines perpendicular to these tangent lines do not cross at the same point.
From basic 3D geometry considerations, and as pointed out in (Klein and Drummond, 2005), it follows that the blurring paths are circumferences on an ideal spherical sensor S, Figure 2.b. Then, if we project the image from π onto the surface of S, the blurring paths become circumferences. Each of these circumferences belongs to a plane, and all these planes share the same normal: the rotation axis a. Let us now consider one of these planes, π_C, tangent to the sphere. The projections of the blurring paths on π_C are circumferences, Figure 2.b.
The plane π and the plane π_C are related by a projective transformation determined by two parameters, namely (α, β), the angles between the two planes. Define the map M_{α,β} : π ↦ π_{α,β} as the projection from V between π and π_{α,β}, the plane tangent to S forming angles (α, β) with π (Rothwell et al., 1992). We search for the (α, β) that project the blurring paths into circumferences, by modifying the voting procedure of Section 3.2.
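Since both π and π_{α,β} lie at focal distance from the projection center V, the map M_{α,β} can be modeled, up to the exact parameterization chosen by the authors, as the homography induced by tilting the image plane about V. The sketch below is only an illustration under that assumption; the intrinsic parameters f, cx, cy and the order of the two tilts are assumptions as well.

```python
import numpy as np

def plane_to_plane_map(alpha, beta, f, cx, cy):
    """Sketch of a map like M_{alpha,beta}: the projection from V between
    the image plane pi and a plane tilted by angles (alpha, beta), modeled
    here as the homography H = K R K^{-1} induced by tilting the image
    plane about the projection center (our assumption)."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # tilt by alpha
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # tilt by beta
    return K @ (Ry @ Rx) @ np.linalg.inv(K)

def map_point(H, x):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    xh = H @ np.array([x[0], x[1], 1.0])
    return xh[:2] / xh[2]
```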
There is no need to transform the whole image with M_{α,β}, as each l_i, the line tangent to the blurring path at x_i, can be directly mapped via M_{α,β}. Let v_i^{α,β} be the weight function (7) associated with the data (x_i, θ_i), i = 1, ..., N, mapped via M_{α,β}. The parameter pair identifying the plane π_C is estimated as
(\hat{\alpha}, \hat{\beta}) = \arg\max_{\alpha, \beta} V^{\alpha,\beta}(\hat{p}^{\alpha,\beta}),    (9)

being

\hat{p}^{\alpha,\beta} = \arg\max_{p \in P} V^{\alpha,\beta}(p), \qquad V^{\alpha,\beta}(p) = \sum_{i=1}^{N} v_i^{\alpha,\beta}(p).    (10)
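A direct way to read Eqs. (9)-(10) is as a two-level search: an outer grid over (α, β) and, for each candidate pair, the same voting as in Section 3.2 applied to the mapped data. The sketch below illustrates this reading; it reuses vote_map, plane_to_plane_map and map_point from the earlier sketches, and the search grids, as well as the way tangent directions are mapped through the homography, are our assumptions.

```python
import numpy as np

def estimate_plane_angles(points, thetas, sigmas, alphas, betas,
                          f, cx, cy, grid_x, grid_y):
    """Sketch of the outer search of Eqs. (9)-(10): for every candidate
    (alpha, beta), map the data via M_{alpha,beta} and repeat the voting
    of Section 3.2, keeping the pair whose vote map has the highest peak."""
    best = (None, None, -np.inf)
    for alpha in alphas:
        for beta in betas:
            H = plane_to_plane_map(alpha, beta, f, cx, cy)
            # Map each point and its tangent direction through the homography.
            m_pts, m_thetas = [], []
            for (x, y), theta in zip(points, thetas):
                p = map_point(H, (x, y))
                q = map_point(H, (x + np.cos(theta), y + np.sin(theta)))
                m_pts.append(p)
                m_thetas.append(np.arctan2(q[1] - p[1], q[0] - p[0]))
            V, p_hat = vote_map(m_pts, m_thetas, sigmas, grid_x, grid_y)
            peak = V.max()                      # V^{alpha,beta}(p_hat)
            if peak > best[2]:
                best = (alpha, beta, peak)
    return best[0], best[1]                     # (alpha_hat, beta_hat)
```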
Figures 5.d and 5.e represent the votes when the data have been transformed according to the correctly estimated parameters α̂ = 45°, β̂ = 0°. These votes are much more concentrated than the votes in Figures 5.b and 5.c.
Once α̂ and β̂ have been estimated, the camera rotation axis a is determined, and it is possible to map the image z to M_{α̂,β̂}(z). As said before, in M_{α̂,β̂}(z) the blurring paths are circumferences centered at M_{α̂,β̂}(C) ≡ π_C ∩ a, and it is therefore possible to transform M_{α̂,β̂}(z) into polar coordinates for estimating the angular speed.
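Once the center of the circular paths is known, the rectified image can be resampled into polar coordinates about it, so that circular blurring paths become rows of the resampled image. The sketch below is a generic polar resampling, not the authors' procedure; the radial and angular resolutions are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, center, n_radii=256, n_angles=360):
    """Sketch: resample an image into polar coordinates around `center`
    (row, col), so that circumferences centered there map to output rows."""
    cy, cx = center
    r_max = np.hypot(*[max(c, s - 1 - c) for c, s in zip((cy, cx), img.shape)])
    radii = np.linspace(0.0, r_max, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    R, T = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    # Bilinear interpolation at the polar sample positions.
    return map_coordinates(img, [rows, cols], order=1, mode="nearest")
```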
3.4 Angular Speed Estimation
Once C has been determined, it is possible to transform M_{α̂,β̂}(z) (the image projected on π_C) on a polar