minimal case is for $m = 3$ and $k = 3$. This gives the full reconstruction of both points and directions up to an unknown choice of Euclidean coordinate system and an unknown choice of z-coordinate for the points $z_i$.
If the rank is 1, this could be because the directions are parallel. In this case, similar to the discussion above, we can obtain one of the coordinates of the positions $z_i$, but this is trivial since the measurements $D_{i,j}$ are such coordinates by definition.
If the rank is 1 because the points lie on a line, we obtain a one-parameter family of reconstructions based on $Z^T = U_1/a$ and $N = a S_1 V_1^T$, so that $Z^T N = U_1 S_1 V_1^T$ reproduces the rank-1 part of $D$. Here $a$ is an unknown constant that has to fulfill $a \le 1/l$, where $l = \max_j |S_1 V_{1,j}|$. For each such $a$ it is possible to extend the directions $n_j$ so that they have length one, but there are several such choices.
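As an illustration, one member of this family can be constructed from the rank-1 SVD as follows (a minimal Python sketch, assuming $D$ is the $m \times k$ measurement matrix of numerical rank 1 and using the scaling convention above; the function name is illustrative):

```python
# A minimal sketch (assumption: D is the m-by-k measurement matrix with
# numerical rank 1 because the antenna positions lie on a line).
import numpy as np

def rank1_family_member(D, a):
    """One member of the one-parameter family, for a given scale a."""
    U, S, Vt = np.linalg.svd(D)
    u1, s1, v1t = U[:, :1], S[0], Vt[:1, :]      # rank-1 SVD factors
    l = np.max(np.abs(s1 * v1t))                 # l = max_j |S_1 V_{1,j}|
    assert a <= 1.0 / l, "a must fulfill a <= 1/l"
    Z = u1.T / a        # 1-by-m: one coordinate of each position z_i
    N = a * s1 * v1t    # 1-by-k: column norms <= 1, extendable to length 1
    return Z, N
```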
2.3 Overdetermined Cases
When more measurements are available than in the minimal case discussed in the previous section, we need to solve an overdetermined system in a least-squares sense or with robust error measures, e.g. the $L_1$-norm. Here we focus on the following least-squares formulation of the pose problem:
Problem 3. Given measurements $D_{i,j}$, $i = 1, \dots, m$ and $j = 1, \dots, k$, from the antenna at $m$ different positions to $k$ base stations, determine both the relative motion of the antenna $z_i$ and the directions to the base stations $n_j$ so that

$$\min_{Z,N} \; \|D - Z^T N\|^2_{\mathrm{Frob}} \qquad (1)$$
$$\text{s.t.}\quad \|n_j\|_2 = 1, \; j = 1, \dots, k,$$

where $\|\cdot\|_{\mathrm{Frob}}$ denotes the Frobenius norm.
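For concreteness, the objective and constraints of Problem 3 can be evaluated as follows (a small sketch; the assumed shapes are $D \in \mathbb{R}^{m \times k}$, $Z \in \mathbb{R}^{3 \times m}$, $N \in \mathbb{R}^{3 \times k}$):

```python
# A sketch of the objective and constraints in Problem 3 / Eq. (1)
# (assumed shapes: D is m-by-k, Z is 3-by-m, N is 3-by-k).
import numpy as np

def pose_cost(D, Z, N):
    """Frobenius residual ||D - Z^T N||^2_Frob of Eq. (1)."""
    return np.linalg.norm(D - Z.T @ N, ord='fro') ** 2

def is_feasible(N, tol=1e-9):
    """Check the unit-norm constraints ||n_j||_2 = 1 column-wise."""
    return bool(np.all(np.abs(np.linalg.norm(N, axis=0) - 1.0) < tol))
```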
For the overdetermined cases, that is, $m > 4$ and $k \ge 6$ or $m \ge 4$ and $k > 6$, it is possible to modify Algorithm 1 to obtain an efficient but not necessarily optimal algorithm that finds a reconstruction fitting the data well, using the following three modifications: (i) the best rank-3 approximation can still be found in steps 4-5 using the singular value decomposition, (ii) the estimate of $B$ in step 6 can be performed in a least-squares sense, and (iii) the columns of $N$ are re-normalized to length 1. This results in a reconstruction that differs from the measurements, but both steps are relatively robust to noise. The problem of $B$ not being positive semi-definite can be attacked by non-linear optimization. Here we try to optimize $A$ so that $\sum_{j=1}^{k} (n_j^T A^T A n_j - 1)^2$ is minimized. This can be achieved, e.g., by initializing with $A = I$ and then applying non-linear optimization to the error function.
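A minimal sketch of this correction step, assuming $N$ is the $3 \times k$ matrix of direction estimates from the modified algorithm (the choice of BFGS as solver is one possibility, not prescribed above):

```python
# A sketch of the correction step (assumption: N is the 3-by-k matrix of
# direction estimates from the modified Algorithm 1; BFGS is one possible
# choice of solver).
import numpy as np
from scipy.optimize import minimize

def refine_A(N):
    """Minimize sum_j (n_j^T A^T A n_j - 1)^2 over A, starting at A = I."""
    def cost(a_flat):
        A = a_flat.reshape(3, 3)
        AN = A @ N                           # columns are A n_j
        r = np.sum(AN * AN, axis=0) - 1.0    # n_j^T A^T A n_j - 1
        return np.sum(r ** 2)
    res = minimize(cost, np.eye(3).ravel(), method='BFGS')
    return res.x.reshape(3, 3)
```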
Clearly, we lose any guarantee on the optimality of the solution when we enforce the constraints as in step (iii). However, the solution can serve as a good initialization for the subsequent optimization algorithms we present in this section. We discuss how to use alternating optimization and the Levenberg-Marquardt algorithm (LMA) to obtain better solutions. The first algorithm starts with an initial feasible solution for $Z$ and $N$, and then alternates between optimizing $Z$ given $N$ and vice versa. The latter is essentially a method combining the Gauss-Newton algorithm and gradient descent that improves the solution locally. For both methods, we need to treat the constraints on the direction vectors properly to ensure convergence.
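For the LMA, one way to treat the unit-norm constraints is to parameterize each direction by spherical angles, so that the constraints hold by construction. The following sketch illustrates this; the parameterization and the solver choice are assumptions for illustration, not prescribed by the text above:

```python
# A sketch of an LMA refinement (assumption: each direction is parameterized
# by spherical angles (theta_j, phi_j), so the unit-norm constraints hold by
# construction; angles0 is a k-by-2 array of initial angles).
import numpy as np
from scipy.optimize import least_squares

def lm_refine(D, Z0, angles0):
    m = Z0.shape[1]
    def residuals(x):
        Z = x[:3 * m].reshape(3, m)
        th, ph = x[3 * m::2], x[3 * m + 1::2]
        N = np.vstack([np.sin(th) * np.cos(ph),   # unit-length columns
                       np.sin(th) * np.sin(ph),
                       np.cos(th)])
        return (D - Z.T @ N).ravel()
    x0 = np.concatenate([Z0.ravel(), angles0.ravel()])
    return least_squares(residuals, x0, method='lm')
```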
2.3.1 Alternating Optimization
In order to find a local minimum of Problem 3, we can use a coordinate descent scheme. Specifically, we iteratively optimize the cost function in Problem 3 with respect to $Z$ given $N$, and then find the optimal feasible $N$ with $Z$ fixed. If we initialize $N$ such that it satisfies the norm constraints, it is easy to see that the alternating procedure converges (Algorithm 2).
Algorithm 2.
Given the measurement matrix $D$ with $m > 4$ and $k \ge 6$ or $m \ge 4$ and $k > 6$,
1. Construct $\bar{D}$ and initialize $Z$ and $N$ as in Algorithm 1.
2. Fix $N$, find the optimal $Z$.
3. Fix $Z$, solve the constrained minimization for each $n_j$, $j = 1, \dots, k$.
4. Repeat (2) and (3) until convergence or a predefined number of iterations is reached.
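A minimal sketch of Algorithm 2 follows; the projected update in step 3 is a simple stand-in for the constrained solvers discussed below, and the function and argument names are illustrative:

```python
# A sketch of Algorithm 2 (assumptions: D is m-by-k; Z0, N0 come from the
# modified Algorithm 1 with the columns of N0 normalized to unit length; the
# projected update in step 3 is a stand-in for a proper constrained solver).
import numpy as np

def alternating_opt(D, Z0, N0, max_iters=100, tol=1e-10):
    Z, N, prev = Z0.copy(), N0.copy(), np.inf
    for _ in range(max_iters):
        # Step 2: fix N, solve D^T = N^T Z for Z in the least-squares sense.
        Z = np.linalg.lstsq(N.T, D.T, rcond=None)[0]
        # Step 3: fix Z, update each n_j and project onto the unit sphere.
        for j in range(N.shape[1]):
            nj = np.linalg.lstsq(Z.T, D[:, j], rcond=None)[0]
            N[:, j] = nj / np.linalg.norm(nj)
        cost = np.linalg.norm(D - Z.T @ N, 'fro') ** 2
        if prev - cost < tol:   # Step 4: stop when no further improvement
            break
        prev = cost
    return Z, N
```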
To enable the alternating optimization, we need to solve two separate optimization problems. The first is to find the optimal $Z$ given $N$. This is a classic least-squares problem, which is convex and can be solved efficiently. On the other hand, solving for the optimal $n_j$ given $Z$ is not always convex due to the additional constraints on the $n_j$'s. In this case, we seek a local minimum for each $n_j$ as a constrained minimization problem. We solve the small constrained problems (3 variables each) independently with an interior point method.
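A sketch of one such small problem is given below, using SciPy's 'trust-constr' solver (a trust-region method with interior-point handling of constraints) as a stand-in for the interior point method; the function name is illustrative:

```python
# A sketch of one small constrained problem (3 variables), using SciPy's
# 'trust-constr' solver as a stand-in for an interior point method.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def solve_nj(Z, d_j, n0):
    """min_n ||d_j - Z^T n||^2  s.t.  ||n||_2 = 1."""
    cost = lambda n: np.sum((d_j - Z.T @ n) ** 2)
    unit_norm = NonlinearConstraint(lambda n: n @ n, 1.0, 1.0)
    return minimize(cost, n0, method='trust-constr',
                    constraints=[unit_norm]).x
```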
Alternatively, we can solve the constrained optimization by solving polynomial equations. This is related to the fact that, for a given $Z$, the level sets of the cost function with respect to $n_j$ are surfaces of ellipsoids in $\mathbb{R}^3$ (the centers are in this case the solutions from the singular value decomposition). Geometrically, the norm-1 constraint on $n_j$ means that the feasible solutions lie on the unit sphere centered at the origin. Therefore, the optimal solution for $n_j$ is one of the points that the