3 PREVIOUS WORK
This section briefly describes the conventional
solutions of (Kalantari et al., 2009a) and (Fraundorfer
et al., 2010), and points out their drawbacks.
The outlines of the two algorithms are shown in Figures
2(a) and 2(b), respectively.
3.1 Kalantari et al.’s Solution
Kalantari et al. propose a solution to obtain all un-
knowns in Eq. (5) by solving a system of multivariate
polynomial equations.
Firstly, the Weierstrass substitution is used to express
cos(θ) and sin(θ) without trigonometric functions:
cos(θ) = (1 − p^2)/(1 + p^2) and sin(θ) = 2p/(1 + p^2),
where p = tan(θ/2). By substituting 3 point correspondences
into Eq. (5) and adding a new scale constraint ||t|| = 1,
one obtains 4 polynomial equations of degree 3 in the 4
unknowns {t_x, t_y, t_z, p}. Kalantari et al. adopt a
Gröbner basis method to solve the system of polynomial
equations. The solutions are obtained by Gauss-Jordan
elimination of a 65 × 77 Macaulay matrix and eigenvalue
decomposition of a 12 × 12 action matrix. Finally, at
most 12 solutions are given from the eigenvectors.
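The substitution itself can be checked symbolically. The following SymPy sketch is only an illustration and is not taken from (Kalantari et al., 2009a); the assumption that the remaining rotation is about the z-axis (an R_z) follows the notation used later in this section.

```python
import sympy as sp

theta, p = sp.symbols('theta p', real=True)

# Rational parameterization of the rotation angle with p = tan(theta/2)
cos_p = (1 - p**2) / (1 + p**2)
sin_p = 2*p / (1 + p**2)

# SymPy's tan-half-angle rewrite reproduces exactly these expressions
assert sp.simplify(sp.cos(theta).rewrite(sp.tan)
                   - cos_p.subs(p, sp.tan(theta/2))) == 0
assert sp.simplify(sp.sin(theta).rewrite(sp.tan)
                   - sin_p.subs(p, sp.tan(theta/2))) == 0

# The remaining one-axis rotation (taken here as R_z) becomes a rational
# matrix in p; clearing the common denominator 1 + p^2 makes the epipolar
# equations polynomial in {t_x, t_y, t_z, p}.
Rz_p = sp.Matrix([[cos_p, -sin_p, 0],
                  [sin_p,  cos_p, 0],
                  [0,      0,     1]])
```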
Kalantari et al.'s solution incurs a much higher computational
cost than the point-correspondence-based algorithms because of
the decomposition of large matrices. Moreover, it is difficult
to extend to the least squares case, in which the degree of the
polynomial equations becomes higher and the matrices grow to a
few hundred dimensions.
In the experiments in this paper, the size of the decomposed
matrices and the number of solutions are not the same as in
(Kalantari et al., 2009a). The details of the implementation
are described in Section 5.2.
3.2 Fraundorfer et al.’s Solution
Fraundorfer et al. estimate the essential matrix in Eq.
(6) instead of the physical parameters. Their most important
contribution is to propose solutions for the least squares case.
Fraundorfer et al. propose 3 solutions, for the cases of
3, 4, and more than 5 point correspondences. The basic
idea is very similar to that of the point-correspondence-based
algorithms, i.e., the 5-point, the 7-point and the 8-point
algorithms.
From a set of n point correspondences, Eq. (6) can
be equivalently written as

M vec(E) = 0_{n×1},   (9)

where M = [x_1 ⊗ x'_1  · · ·  x_n ⊗ x'_n]^T, vec(·) denotes
the vectorization of a matrix, and ⊗ denotes the Kronecker
product.
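As an illustration of Eq. (9) (a sketch, not the implementation of (Fraundorfer et al., 2010)), the following NumPy snippet assembles M row by row with Kronecker products; it assumes Eq. (6) is the epipolar constraint x'^T E x = 0 and that vec(·) stacks the columns of E.

```python
import numpy as np

def build_M(x, x_prime):
    """x, x_prime: (n, 3) arrays of corresponding calibrated homogeneous points."""
    n = x.shape[0]
    M = np.empty((n, 9))
    for i in range(n):
        # Row i is (x_i kron x'_i)^T; with vec(E) = E.flatten(order='F')
        # (column stacking), the i-th entry of M @ vec(E) equals x'_i^T E x_i.
        M[i] = np.kron(x[i], x_prime[i])
    return M
```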
The solution of Eq. (9) is obtained by

E = ∑_{i=1}^{6−n} a_i V_i,   (10)

where V_i is the matrix corresponding to the generators of the
right nullspace of the coefficient matrix M, and a_i is an
unknown coefficient.
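The nullspace generators can be obtained, for example, from the SVD of M. The sketch below is only illustrative: it uses the plain 9-entry vec(E), which yields 9 − n generators, whereas the count of 6 − n in Eq. (10) follows from the paper's reduced parameterization of E.

```python
import numpy as np

def nullspace_generators(M, tol=1e-10):
    """Right-nullspace basis of M, each 9-vector reshaped into a 3x3 matrix."""
    _, s, Vt = np.linalg.svd(M)                  # full SVD: Vt is 9 x 9
    rank = int(np.sum(s > tol * s[0]))
    return [v.reshape(3, 3, order='F') for v in Vt[rank:]]

def combine(V, a):
    """Eq. (10): E = sum_i a_i V_i."""
    return sum(ai * Vi for ai, Vi in zip(a, V))
```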
Estimating E is equivalent to calculating the a_i. One of the
a_i can be set to 1 to reduce the number of unknowns, due to
the scale ambiguity of E. In the 3-point case, Eqs. (7) and (8)
are used to solve for 2 unknowns. Similarly, Eq. (7) is used to
solve for 1 unknown in the 4-point case. For more than 5 point
correspondences, the solution is obtained by taking the
eigenvector corresponding to the smallest eigenvalue of M^T M.
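The least squares step for 5 or more correspondences can be sketched as follows (illustrative only; it reuses the column-stacking vec(·) convention assumed in the earlier sketch).

```python
import numpy as np

def least_squares_E(M):
    """vec(E) as the eigenvector of M^T M with the smallest eigenvalue."""
    w, V = np.linalg.eigh(M.T @ M)        # symmetric; eigenvalues in ascending order
    return V[:, 0].reshape(3, 3, order='F')
```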
An essential matrix can be decomposed into 2 R_z's and ±t
(Horn, 1990), (Hartley and Zisserman, 2004). Fraundorfer et
al.'s 3-point, 4-point and 5-point algorithms estimate at most
4, 3 and 1 essential matrices, respectively. Therefore, they
give at most 16, 12 and 4 solutions.
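The decomposition can be sketched with the standard SVD-based procedure of (Hartley and Zisserman, 2004); in this paper's setting the two recovered rotations correspond to the R_z's mentioned above. This is an illustration, not the cited implementation.

```python
import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1); this only changes the sign of E,
    # which is defined up to scale anyway.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt          # first rotation candidate
    R2 = U @ W.T @ Vt        # second rotation candidate
    t = U[:, 2]              # translation direction, up to sign and scale
    return (R1, R2), (t, -t)
```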
Fraundorfer et al.'s 3-point algorithm satisfies all the
constraints. However, the 4-point algorithm considers only one
constraint, and the 5-point algorithm does not consider any
constraints. For this reason, the solutions of the 4-point and
the 5-point algorithms may not be essential matrices. To correct
an estimated E to an essential matrix, a constraint enforcement
is carried out by replacing the singular values of E so that two
are nonzero and equal, and the third is zero. The enforcement
only minimizes the change of the Frobenius norm; it does not
guarantee the θ and t that minimize Eq. (6). The 4-point and
the 5-point algorithms therefore do not minimize a physically
meaningful cost function and are not optimal solutions.
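A sketch of this constraint enforcement is given below. Using the mean of the two largest singular values is an assumption; the text above only requires two equal nonzero singular values and a zero third one.

```python
import numpy as np

def enforce_essential(E):
    """Project an estimated E onto the set of essential matrices."""
    U, s, Vt = np.linalg.svd(E)
    s_bar = (s[0] + s[1]) / 2.0            # assumed choice for the common value
    return U @ np.diag([s_bar, s_bar, 0.0]) @ Vt
```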
4 PROPOSED SOLUTION
This section first describes the basic idea of the proposed
solution in the minimal case, and then how to extend the idea
to the least squares case. The algorithm outline is shown in
Figure 2(c).
4.1 3-point Algorithm for the Minimal
Case
Equation (5) can be equivalently written as

v^T t = 0,   (11)