3.2 Follow-up Refinement of Initial Pose
with Amplitude Image
In the paper of Klionovska and Benninghoff (Klionovska and Benninghoff, 2017), we proposed the algorithm used to acquire the pose of a non-cooperative target from the depth image of a PMD sensor and a known 3D model. In that paper, we made two observations: the use of a proper shape of the 3D model (a frontal hexagon and a "nose") is a prerequisite for the correct operation of the algorithm, and the determination of the attitude of the target using only the point cloud from the depth image is a demanding problem. Specifically, determining the target's rotation around its principal axis of inertia (the axis x_B in Figure 1 (left)) from the 3D point cloud alone can lead to misalignments of up to 30 degrees, and the other rotational components can also be affected. Since an accurate initial guess is preferable for the tracker in order to navigate to the target in a frame-to-frame mode, we propose an initial pose refinement with the amplitude image.
In the work of Klionovska et al. (Klionovska et al., 2018) we presented for the first time a navigation system which uses both depth and amplitude images from the PMD sensor. We showed that using the amplitude image along with the depth image for pose estimation leads to stable tracking, since the amplitude information can be considered redundant and allows a pose to be calculated when the depth algorithm fails or yields wrong measurements. Moreover, it was shown that distance information of the target that is (partly) lost in the depth images is still present in the amplitude images. This means that with the amplitude image we can obtain a more complete representation of the imaged target and, consequently, a more accurate estimate of the pose. Finally, the model-based pose estimation technique with the 2D amplitude image demonstrates more accurate estimation of the target's attitude in comparison with the 3D pose estimation technique.
Having analyzed the advantages of using the amplitude image for pose estimation during tracking, we have decided to apply it as a supplementary processing step for the enhancement of the initial pose. We assume that an estimated pose of the target is available after the pose initialization technique with the depth image. This means that the proposed technique with the amplitude image already has a guess pose as input, which is a necessary prerequisite for the chosen refinement technique. For the initial pose refinement, we apply an image processing technique based on a line detection procedure with the Hough Line Transform. The detected straight lines, namely the end points of those lines, serve as the feature points for obtaining the pose by solving a 3D-2D problem. Among the variety of solvers (Sharma and D'Amico, 2016), we propose here a Gauss-Newton solver based on a least-squares minimization problem (Nocedal and Wright, 2006; Cropp, 2001) in order to estimate the pose of the target relative to the camera frame. The Gauss-Newton solver iteratively solves the perspective projection equations starting from the known first guess. Let us consider the image processing technique and the Gauss-Newton solver.
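A minimal sketch of such a Gauss-Newton pose solver is given below. This is an illustrative implementation, not the exact code used in this work; the axis-angle pose parameterization, the numerical Jacobian, and all numerical values are assumptions.

```python
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(params, pts3d, A):
    """Perspective projection of model points for pose params [r | t]."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = pts3d @ R.T + t            # model points in the camera frame
    img = cam @ A.T                  # apply the calibration matrix
    return img[:, :2] / img[:, 2:3]  # normalize to pixel coordinates

def gauss_newton_pose(pts3d, pts2d, A, guess, iters=20):
    """Refine a 6-DoF pose from 3D-2D feature correspondences."""
    p = guess.astype(float).copy()
    for _ in range(iters):
        res = (project(p, pts3d, A) - pts2d).ravel()
        # numerical Jacobian of the reprojection residual
        J = np.zeros((res.size, 6))
        eps = 1e-6
        for j in range(6):
            dp = p.copy()
            dp[j] += eps
            J[:, j] = ((project(dp, pts3d, A) - pts2d).ravel() - res) / eps
        p += np.linalg.solve(J.T @ J, -J.T @ res)  # normal-equations step
    return p
```

Starting from the guess pose provided by the depth image, a few iterations typically suffice when the 3D-2D correspondences are correct.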
3.2.1 Image Processing
Since we are able to estimate the initial pose only from the front side of the mockup, the visible front hexagon is defined as an appropriate feature. The hexagon is constructed from six straight lines, which are completely observable when the target is in the FOV of the camera. The image processing pipeline for detecting straight lines consists of the following steps (HoughLineTransform, 2009):
• Low-pass filtering to reduce image noise
• Application of the Canny edge operator (Canny, 1986) for edge extraction in the amplitude images
• Application of the Probabilistic Hough Line Transform for finite line detection
The straight lines also give us their end points, which are taken as the detected features. Knowing the initial pose defined by the depth image, called the guess pose T_guess, and the calibration matrix A of the PMD sensor, the 3D model can be re-projected onto the image plane, see Figure 5 (left). The calibration matrix A is given by
A = [ α  γ  u_0
      0  β  v_0
      0  0   1  ]    (1)
and includes the following parameters: the focal lengths α and β, the coordinates of the principal point (u_0, v_0), and the skew factor γ between the x and y axes. We determined the calibration matrix of the current PMD sensor in the paper of Klionovska et al. (Klionovska et al., 2017) using the DLR CalDe and DLR CalLab calibration toolbox. After the re-projection of the 3D model, there are two sets of points in the image: the feature points detected in the image and the re-projected points of the 3D model. By finding nearest neighbors between them, a list of feature correspondences, as in Figure 5 (right), can be generated.
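The correspondence step can be sketched as follows: a minimal pinhole re-projection of the model under the guess pose, followed by a brute-force nearest-neighbor search. The rejection threshold for distant matches is an illustrative assumption.

```python
import numpy as np

def reproject_model(pts3d, R_guess, t_guess, A):
    """Re-project 3D model points into the image under the guess pose (R, t)."""
    cam = pts3d @ R_guess.T + t_guess  # model points in the camera frame
    img = cam @ A.T                    # apply the calibration matrix
    return img[:, :2] / img[:, 2:3]    # pixel coordinates

def match_features(detected2d, projected2d, max_dist=20.0):
    """For each detected feature, find the nearest re-projected model point."""
    pairs = []
    for i, p in enumerate(detected2d):
        d = np.linalg.norm(projected2d - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:           # reject implausible correspondences
            pairs.append((i, j))
    return pairs
```

The resulting index pairs link each detected 2D end point to a 3D model point, which is exactly the input the Gauss-Newton solver needs.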
3.2.2 Gauss-Newton Solver
The following step is to calculate the pose of the spacecraft with respect to the known 3D-2D feature cor-
VISAPP 2019 - 14th International Conference on Computer Vision Theory and Applications