Detection models everything that happens after the light has passed through the optics.
Non-Linearities occur mostly during the integration of the received light. We model this with a non-linear photon response curve, described in Sec. 4.4.
Fixed Pattern Noise (FPN) occurs due to differing chip-internal signal propagation times as well as slightly different gains for each pixel. This is modeled in Sec. 4.5 (see the sketch after this list).
Background Illumination can have a significant
effect on the recorded signal. It will be ignored
in this paper as it is negligible in our case, but
we discuss it in Sec. 8.2.
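As a rough illustration only (the actual models are given in Secs. 4.4 and 4.5), a per-pixel FPN correction can be sketched with hypothetical per-pixel gain and offset arrays obtained from calibration:

def correct_fpn(raw, gain, offset):
    # Hypothetical per-pixel correction: 'offset' accounts for differing
    # chip-internal signal propagation times, 'gain' for per-pixel gain
    # differences. Not necessarily the model of Sec. 4.5.
    return (raw - offset) / gain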
2 RELATED WORK
Monocular camera calibration can be considered a “solved problem”, with ready-to-use calibration routines available, e.g., in the OpenCV library (OpenCV, 2014).
All ToF-calibration-related papers we found have in common that they calculate an a-priori offset using a formula such as (4b) and compensate the deviation of this value from a ground-truth value using various methods. This means that, unlike us, they use only ∆t, not A, as input to correct the measured distances.
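As (4b) is not reproduced in this excerpt, the following sketch assumes the standard continuous-wave relation d = c·∆t/2 for the a-priori distance; the compensation function is a placeholder for the various methods cited below:

C = 299792458.0  # speed of light [m/s]

def a_priori_distance(delta_t):
    # Distance from the measured round-trip delay, analogous to a
    # formula such as (4b).
    return C * delta_t / 2.0

def corrected_distance(delta_t, compensation):
    # Prior work corrects only the delay-based distance towards ground
    # truth; the amplitude A is not used as an input.
    d = a_priori_distance(delta_t)
    return d - compensation(d)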
(Kahlmann et al., 2006) used a grid of active NIR LEDs for intrinsic camera calibration. They then used a high-accuracy distance-measurement track line to obtain ground-truth distances and median-filtered these measurements to generate a look-up table, depending on integration time and distance, for compensating deviations from the ground truth. They already observed the influence of self-induced temperature changes on the measured distances.
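A minimal sketch of such a look-up-table compensation, assuming deviations tabulated over a grid of integration times and distances (grid values and interpolation scheme are placeholders):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

int_times = np.array([0.5, 1.0, 2.0, 4.0])               # [ms], placeholder
distances = np.linspace(0.5, 7.5, 15)                    # [m], placeholder
deviation = np.zeros((len(int_times), len(distances)))   # filled offline

lut = RegularGridInterpolator((int_times, distances), deviation)

def compensate(d_measured, t_int):
    # Subtract the tabulated deviation from the measured distance.
    return d_measured - lut((t_int, d_measured))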
(Lindner and Kolb, 2006) fit the deviation from the ground-truth distance using a global B-spline, which was then refined by a linear per-pixel fit.
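A sketch of this two-stage fit, with hypothetical deviation samples (spline degree and smoothing are our assumptions):

import numpy as np
from scipy.interpolate import splev, splrep

d_meas = np.linspace(0.5, 7.5, 200)   # measured distances [m], placeholder
dev = np.zeros_like(d_meas)           # deviations from ground truth

# Stage 1: global B-spline over samples from all pixels.
tck = splrep(d_meas, dev, k=3, s=1e-4)

def globally_corrected(d):
    return d - splev(d, tck)

# Stage 2 (per pixel): refine with a linear fit a[p]*d + b[p] of the
# remaining per-pixel residuals against ground truth.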
Both (Fuchs and May, 2008) and (Fuchs and Hirzinger, 2008) attached their camera to an industrial robot to obtain ground-truth distances to a calibration plane. They fit the deviation from the ground truth using a set of distance-dependent splines, with different splines for different amplitude ranges.
In our opinion, none of the above methods is capable of properly modeling non-linearities, and all of them will suffer decreasing accuracy as soon as different integration times or large variations in reflectivity occur. Furthermore, they are not able to automatically detect saturation effects (although we assume they might detect them a priori) or to compute HDR images from a set of images with different integration times.
In other related work, (Mure-Dubois and Hügli, 2007) propose a way to compensate lens scattering by modeling a point spread function (PSF) as a sum of Gaussians. In general, we will use a similar approach. However, we can make more precise calibration measurements using HDR images, and we use a combination of many more Gaussian kernels, which we calibrate by exploiting the linearity of the scattering effect.
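For illustration, a PSF composed as a weighted sum of isotropic Gaussian kernels might look as follows; kernel count, widths, and weights are placeholders, not our calibrated values:

import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def scattering_psf(size, sigmas, weights):
    # Weighted sum of Gaussians; due to the linearity of the scattering
    # effect, the weights can be calibrated independently.
    psf = sum(w * gaussian_kernel(size, s) for s, w in zip(sigmas, weights))
    return psf / psf.sum()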
The flying-pixel effect has often been addressed; e.g., (Sabov and Krüger, 2010) present different approaches to identifying and correcting flying pixels. Also, (Fuchs and May, 2008) give a simple method to filter out “jump edges”.
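A simple jump-edge filter in this spirit flags pixels whose depth differs too strongly from a neighbor’s; the sketch below uses a 4-neighborhood and a placeholder threshold:

import numpy as np

def jump_edge_mask(depth, threshold=0.1):
    # Mark both pixels adjacent to a depth jump larger than 'threshold'
    # metres as flying-pixel candidates.
    mask = np.zeros(depth.shape, dtype=bool)
    for axis in (0, 1):
        jump = np.abs(np.diff(depth, axis=axis)) > threshold
        if axis == 0:
            mask[:-1, :] |= jump
            mask[1:, :] |= jump
        else:
            mask[:, :-1] |= jump
            mask[:, 1:] |= jump
    return mask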
Finally, multi-path reflections are handled by (Jimenez et al., 2012). Their iterative compensation method is computationally very expensive and goes beyond the scope of this paper.
3 INTRINSIC AND EXTRINSIC
MONOCULAR CALIBRATION
In order to calibrate the actual sensor, we first calibrated the camera intrinsics and obtained ground-truth positions of the camera relative to a checkerboard target. All operations are based solely on amplitude images, which we obtain from the camera just as from an ordinary camera. Amplitudes are calculated as the magnitude of the real and imaginary parts according to the simple model (4a).
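Concretely, with per-pixel real and imaginary parts of the measurement, this amounts to:

import numpy as np

def amplitude(real, imag):
    # Amplitude as the magnitude of the complex measurement, cf. (4a).
    return np.hypot(real, imag)  # sqrt(real**2 + imag**2)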
As monocular camera calibration is a mature and reliable procedure, we use the result a) for the monocular intrinsic part of our model that maps 3D points to pixels, b) to obtain distance values for each pixel for calibrating the depth-related part of the model, and c) as ground truth for evaluating the results in Sec. 5. We believe that using the same technique for b) and c) is valid, since the technique is highly trustworthy and applied to different images.
3.1 Experimental Setup
Determining intrinsic camera parameters can be considered a standard problem. It is commonly solved by checkerboard calibration, e.g., using the routines provided by (OpenCV, 2014). Having a wide-angle, low-resolution camera requires an increased number of corner measurements to achieve adequate precision. To achieve this with reasonable effort, we attached the camera to a pan-tilt unit (PTU) mounted on a tripod (see Fig. 2). We use a 1 m × 1 m checkerboard with 0.125 m grid distance, resulting in 8 × 8 corner points per image. Using the PTU helps to partially
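A minimal sketch of this calibration step using OpenCV’s standard routines (Python bindings; pattern and square size follow the setup above; the input amplitude images are assumed to be 8-bit grayscale):

import numpy as np
import cv2

PATTERN = (8, 8)   # corner points per image (see above)
SQUARE = 0.125     # grid distance [m]

def calibrate_intrinsics(amplitude_images):
    # Planar 3D coordinates of the checkerboard corners, in metres.
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

    obj_pts, img_pts = [], []
    for img in amplitude_images:
        found, corners = cv2.findChessboardCorners(img, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                img, corners, (5, 5), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)

    size = amplitude_images[0].shape[::-1]  # (width, height)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs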