the images captured by the on-board camera is a serious hindrance. Vignetting is a radial drop of image brightness caused by partial obstruction of light from the object space to the image space, and is usually dependent on the lens aperture size (Nanda and Cutler, 2001; Kang and Weiss, 2000). These approaches, as
well as (Manders et al., 2004), treat the problem in
terms of its physical origins due to geometrical de-
fects in the optics, and are mostly focused on radio-
metric calibration, i.e. ensuring that the camera re-
sponse to illumination after the calibration conforms
to the principles of homogeneity and superposition.
However, none of the proposed methods deals with chromatic distortion, such as that exhibited by our reference platform and other inexpensive low-power CMOS sensors. Recently, a few papers have attempted to tackle this problem. These solutions share a similar approach to minimizing the computational costs by
using lookup tables to perform the correction in real
time, while the expensive calibration of the correc-
tion tables is performed off-line. In (Xu, 2004) the
author uses a model based on a parabolic lens geom-
etry, solved through the use of an electric field ap-
proach. No quantitative analysis of the results is pro-
vided, but this technique has been successfully used
in practice by one team of autonomous robots in the
RoboCup Four-Legged League (RoboCup is an international joint project to promote AI, robotics, and related fields; http://www.robocup.org/).
Another successful technique used in RoboCup has been presented in (Nisticò and Röfer, 2006), based on a purely black-
box approach where a polynomial correction function
is estimated from sample images using least square
optimization techniques. Since this approach does not
rely on assumptions concerning the physics of the op-
tical system, we feel that it can be more effective in
dealing with digital distortions such as saturation ef-
fects. Again, no quantitative analysis has been presented, and neither paper addresses the problem of inter-robot camera calibration, which has been treated in (Lam, 2004) with a simple linear transformation of each color component considered independently.
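As an aside, the table-based scheme these real-time approaches share can be sketched as follows (a hypothetical illustration in Python/NumPy; the table layout is our assumption, and in practice it would be factored, e.g. by radius, to fit constrained memory):

```python
import numpy as np

def correct_channel(channel, lut):
    """Real-time correction of one image spectrum via a lookup table.

    `channel` is an (H, W) uint8 spectrum; `lut` is an (H, W, 256)
    table, calibrated off-line, mapping each pixel's observed value
    to its corrected value. Applying it is one gather per pixel.
    """
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w]   # per-pixel coordinates
    return lut[ys, xs, channel]   # fancy-indexed gather
```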
2 COLOR MODEL
The first step in understanding the characteristics of this chromatic distortion was to capture images of special cards that we printed with uniform colors, illuminating them with light as uniform as possible and trying to avoid shadows and highlights (this is not critical, and a professional diffuse illuminator is not necessary, as our approach can deal well with noise and disturbances; see Section 2.1). Then we calculated the histograms of the three image spectra, with a number of bins equal to the number of possible values that each spectrum can assume, i.e. 256. Under
these conditions, the histograms of such uniform im-
ages should be uni-modal and exhibit a very narrow
distribution around the mode (in the ideal case, such
distribution should have zero variance, i.e. all the pix-
els have exactly the same color) due only to random
noise. Instead, we observed that the variance of the distribution is a function of the color itself: in the case of the U channel, it is very narrow for cold/bluish color cards, and very wide for warm/yellowish cards (Figure 1(a)). Consequently, we model the chromatic distortion $d_i$ for a given spectrum $i$ of a given color $I$ as a function of $I_i$ itself, which here we will call the brightness component $\lambda_i(I_i)$.
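As an illustration of this measurement step, here is a minimal sketch (Python with NumPy is our choice; the paper does not specify an implementation) that computes the 256-bin per-spectrum histograms and their dispersion for a uniform color card image:

```python
import numpy as np

def channel_statistics(image):
    """Per-spectrum histograms of a uniformly colored card image.

    `image` is assumed to be an (H, W, 3) uint8 array in YUV order.
    Returns, for each spectrum, the 256-bin histogram (one bin per
    possible value), the mode, and the variance around it.
    """
    stats = {}
    for idx, name in enumerate("YUV"):
        channel = image[:, :, idx].ravel()
        hist, _ = np.histogram(channel, bins=256, range=(0, 256))
        stats[name] = {
            "histogram": hist,
            "mode": int(np.argmax(hist)),       # peak of the distribution
            "variance": float(np.var(channel))  # ideally zero; in practice color-dependent
        }
    return stats
```

For an ideal sensor the reported variance would be zero; on our platform it grows for warm cards on the U channel, as Figure 1(a) shows.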
Figure 1: (a) Histograms of the U color band for uniformly colored images: yellow, green and skyblue. Notice how the dispersion (due to the vignetting) increases in inverse proportion to the position of the mode. (b) Brightness distribution of the U color band for a uniformly yellow colored image.
The distribution itself is not centered around the
mode, but tends to concentrate mostly on one side
of it. The reason for this becomes apparent by ob-
serving the spatial distribution of the error (cf. Fig-
ure 1(b)); the phenomenon itself is nothing but a ring-shaped blue/dark cast, whose intensity increases proportionally to the distance from the center of the distortion $(u_d, v_d)$, which lies approximately around the optical center of the image, the principal point. So, let $r = \sqrt{(x - u_d)^2 + (y - v_d)^2}$; then we define the radial component as $\rho_i(r(x, y))$. Putting together the brightness and radial components, we obtain our distortion model:

$$d_i(I(x, y)) \propto \rho_i(r(x, y)) \cdot \lambda_i(I_i(x, y)) \qquad (1)$$
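For concreteness, a minimal sketch of evaluating this model at a pixel, assuming (as the next paragraph motivates) that $\rho_i$ and $\lambda_i$ are represented by polynomial coefficient vectors; the names and signature here are illustrative, not the paper's implementation:

```python
import numpy as np

def distortion(x, y, I_i, rho_coeffs, lambda_coeffs, u_d, v_d):
    """Evaluate d_i(I(x, y)) ∝ rho_i(r(x, y)) * lambda_i(I_i(x, y)).

    rho_coeffs and lambda_coeffs hold polynomial coefficients (highest
    degree first, np.polyval convention) for one spectrum i; (u_d, v_d)
    is the center of the distortion, near the principal point.
    """
    r = np.sqrt((x - u_d) ** 2 + (y - v_d) ** 2)  # radial distance
    return np.polyval(rho_coeffs, r) * np.polyval(lambda_coeffs, I_i)
```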
Now, due to the difficulty of analytically deriving $\rho_i, \lambda_i, \forall i \in \{Y, U, V\}$, about which little is known, we decided to use a black-box optimization approach. Both sets of functions are non-linear, and we chose to approximate them with polynomial functions
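A minimal sketch of such a black-box fit for one component (least squares via np.polyfit is our assumption; the text above only commits to polynomial approximations):

```python
import numpy as np

def fit_component(inputs, observed_distortion, degree=3):
    """Least-squares polynomial fit of rho_i or lambda_i.

    `inputs` are radial distances r (for rho_i) or channel values I_i
    (for lambda_i) taken from the sample images; `observed_distortion`
    are the measured deviations. The degree is a hypothetical choice.
    """
    return np.polyfit(inputs, observed_distortion, degree)
```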