ble, then quit, indicating that there is no repeated
solution.
3. Compute σ, τ, χ, T and S, using formulas (4).
4. If |S| is sufficiently small, then behave as though
S = 0, and continue this algorithm; otherwise quit,
indicating that there is no repeated solution.
5. Solve for u and v, using formulas (13). These for-
mulas uniquely determine a u with u ≥ 0, and a v
with |v| ≤ 1 (as can be proved).
6. Compute tentative values for r₁, r₂ and r₃ using formulas (10), and rⱼ = √Rⱼ (j = 1, 2, 3).
7. Compute corresponding values for c₂ and c₃ using formulas (5). Call these c′₂ and c′₃, though.
8. Test to see whether or not swapping c′₂ and c′₃ would cause them to be closer to the values of c₂ and c₃ (from step 2). If so, then swap r₂ and r₃.
9. If any negation took place in step 2, then compensate for this by now negating a corresponding r₁, r₂ or r₃. Negate r₁ if c₂ and c₃ were negated; negate r₂ if c₁ and c₃ were negated; negate r₃ if c₁ and c₂ were negated.
10. Return the repeated solution (r₁, r₂, r₃).
Note that system (1) (using altered or unaltered cⱼ) has a repeated solution if and only if S = 0, and except in some very special cases, a repeated solution is only a double solution. Also, “closeness” in step 8 might be decided by considering (c′₂ − c₂)² + (c′₃ − c₃)² versus (c′₂ − c₃)² + (c′₃ − c₂)². Although the correctness of this algorithm is not proven here, the mathematical analysis that led to it is described in Section 3, and the simulations to be discussed next also attest to its correctness.
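The bookkeeping in steps 6-10 can be made concrete with a short sketch. The Python outline below is purely illustrative and is not the implementation used for the simulations; the callables compute_R and compute_c23 are hypothetical stand-ins for formulas (10) and (5), which are not reproduced here, and negated records which pair of cosines (if any) was negated in step 2.

    import math

    def dsa_tail(u, v, c2, c3, negated, compute_R, compute_c23):
        """Steps 6-10 of DSA (illustrative sketch only)."""
        # Step 6: tentative distances r_j = sqrt(R_j), with R_j from formulas (10).
        R1, R2, R3 = compute_R(u, v)        # hypothetical stand-in for formulas (10)
        r1, r2, r3 = math.sqrt(R1), math.sqrt(R2), math.sqrt(R3)

        # Step 7: recompute the second and third cosines from these distances.
        c2p, c3p = compute_c23(r1, r2, r3)  # hypothetical stand-in for formulas (5)

        # Step 8: swap r2 and r3 if the swapped pair (c2', c3') lies closer
        # to the given (c2, c3), using the squared-distance test suggested above.
        if (c2p - c3)**2 + (c3p - c2)**2 < (c2p - c2)**2 + (c3p - c3)**2:
            r2, r3 = r3, r2

        # Step 9: compensate for any cosine negation performed in step 2.
        if negated == (2, 3):
            r1 = -r1
        elif negated == (1, 3):
            r2 = -r2
        elif negated == (1, 2):
            r3 = -r3

        # Step 10: return the repeated (double) solution.
        return r1, r2, r3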
2.3 Simulations
Simulations confirm the advantages of using the Dou-
ble Solution Algorithm when |S| is small. These sim-
ulations were performed using compiled Mathemat-
ica functions, running on an Intel Core Duo processor.
The floating point computations were thus performed in 64-bit IEEE floating point format. Even more dramatic differences between the two methods can be expected in a 32-bit floating point environment.
A radius-one danger cylinder was used. Five different distance ranges along the cylinder axis were explored: 0-2, 2-4, 4-6, 6-8 and 8-10. A camera focal point on the cylinder (within the given range) was randomly selected, and the cosines c₁, c₂, c₃ computed. DSA was tested against Grunert’s quartic polynomial method, and the resulting computed distances for r₁ were compared with the actual value of r₁.
Next, each of the three cosines was randomly perturbed by adding or subtracting up to one one-millionth, and the two methods were compared again using the resulting data. This was then repeated, but with a maximum adjustment of one one-hundredth, rather than one one-millionth, for each cosine. In this way, fifteen different experiments (five distance ranges times three maximum cosine perturbation amounts) were considered. Each of these experiments was performed one hundred thousand times, and the results of these trials were averaged.
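To make the setup concrete, the following is a minimal Python sketch of one such experiment; it is not the compiled Mathematica code actually used. The callables make_case, solve_grunert and solve_dsa are hypothetical stand-ins for the random generation of a focal point on the radius-one danger cylinder (within one distance range) and for the two solvers, each solver assumed to return a computed value of r₁.

    import random

    def error_ratio(n_trials, max_perturb, make_case, solve_grunert, solve_dsa):
        """Ratio of average |r1 error|, Grunert's method versus DSA.

        make_case() is assumed to return the true distance r1 and the exact
        cosines (c1, c2, c3) for a random focal point on the danger cylinder;
        max_perturb is 0, 1e-6 or 1e-2 in the experiments described above.
        """
        total_grunert = total_dsa = 0.0
        for _ in range(n_trials):
            r1_true, (c1, c2, c3) = make_case()
            # Perturb each cosine by up to +/- max_perturb.
            c1 += random.uniform(-max_perturb, max_perturb)
            c2 += random.uniform(-max_perturb, max_perturb)
            c3 += random.uniform(-max_perturb, max_perturb)
            total_grunert += abs(solve_grunert(c1, c2, c3) - r1_true)
            total_dsa += abs(solve_dsa(c1, c2, c3) - r1_true)
        # Dividing sums rather than means gives the same ratio.
        return total_grunert / total_dsa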
When the computed cosines (c₁, c₂, c₃) for a point (essentially) on the danger cylinder were left unperturbed, the ratio of the average errors using Grunert’s method versus DSA was between a hundred million and a billion. Admittedly though, the likelihood of having the camera’s focal point right on the danger cylinder, within the computational tolerance of 64-bit floating point arithmetic, is very small. Thus further experiments were conducted using slightly altered values of the cosines.
When the cosines were randomly perturbed by an
amount up to one one-millionth, the ratio of the av-
erage computed errors was as much as 52, when the
focal point was close to the reference point (the 0-2
range). But this ratio dropped to 14 when the focal
point was far away (8-10 range).
When the cosines were randomly perturbed by
an amount up to one one-hundredth, the error ratio
ranged between one and two. Thus the improvement
using DSA was modest in this case. Once again
though, computations performed using 32-bit arith-
metic, instead of 64-bit arithmetic, would more dra-
matically demonstrate a difference in accuracy be-
tween the two methods.
The ratio of the execution times for the two methods was also compared. Here, though, it was difficult
to know how much of the timing reported by Math-
ematica was attributable to the overhead involved in
calling compiled functions from within the Mathe-
matica interpreter. In every case, the reported speedup
(ratio) was in excess of four. However, a quick check
of the actual computations involved in the two meth-
ods suggests that the true speedup should be consid-
erably higher.
3 MATHEMATICAL ANALYSIS
This section captures much of the reasoning underlying DSA. The phrases “R-space,” “r-space” and “c-space” will be used to refer to the abstract three-dimensional spaces of (R₁, R₂, R₃) points, (r₁, r₂, r₃)