or fast changes caused by passing clouds. To cope with such unstable lighting conditions, this paper presents an unsupervised, self-adapting approach to color segmentation that comprises two steps: the first step detects and initializes the color regions of interest; the second step tracks these regions iteratively at runtime. The approach is completely unsupervised in the sense that the user is never confronted with parameter adjustment. The only information required a-priori is a very rough estimate of the problem-specific color regions of interest in the color space.
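As a purely illustrative sketch of this two-step structure (the function names and the simple axis-aligned box representation of a color region are our own assumptions; the actual algorithms are presented in Section 3):

```python
# Illustrative sketch of the two-step structure described above; the names and
# the box representation of a color region are assumptions, not the actual
# algorithms of Section 3.
import numpy as np

def initialize_regions(frame, rough_estimates):
    """Step 1: refine rough a-priori color-space estimates on an initial frame.

    rough_estimates maps a color label to a coarse (lower, upper) bound in
    color space; the refined bounds are tightened around the observed pixels.
    """
    regions = {}
    for label, (lo, hi) in rough_estimates.items():
        mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
        pixels = frame[mask]
        if len(pixels):
            regions[label] = (pixels.min(axis=0), pixels.max(axis=0))
        else:
            regions[label] = (np.asarray(lo), np.asarray(hi))
    return regions

def track_regions(frame, regions):
    """Step 2: iteratively re-estimate each region at runtime so that it can
    follow both gradual and abrupt changes in illumination."""
    return initialize_regions(frame, regions)
```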
The remainder of the paper is organized as follows. Section 2 discusses related work that faces similar challenges or applies similar techniques. Section 3 introduces the overall approach and presents the algorithms for each step in detail. Section
4 presents the results of the experimental evaluation. The last section summarizes the
main contributions of the paper and hints at future work.
2 Related Work
In many industrial and research applications, color provides a strong cue for object recognition performed by robotic vision systems. Therefore, camera calibration
and color indexing are important topics in robotics research. One of the key challenges
for such systems is the ability to cope with changing lighting conditions.
In [6], Mayer et al. present a case study which discusses various lighting conditions,
ranging from artificial to natural light, and their effect on image processing and vision
routines. As a result, they conclude that dynamic approaches for color segmentation
are required under these conditions. Jüngel et al. [5, 4] describe a calibration approach
which initially looks for regions of a reference color (e.g., green, as in the RoboCup 4-Legged League, which serves as the example scenario) by applying simple heuristics. Based on these regions, the regions of the remaining predefined colors are determined in the YUV color space, maintaining their relative placement. However, this approach can be considered risky, since the relative distances between the color regions of interest are not constant and may be stretched by changes in illumination [6]. A somewhat different
approach is presented by Gönner et al. in [2]. They calculate chrominance histograms
representing the frequency of color values of specific objects. The relative frequency
of the color values corresponds to the conditional a-priori probability of a color value given that a specific object is present. The a-posteriori probability of a color value
being assigned to some specific object is derived from a Bayesian combination of these
chrominance histograms.
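In general terms, such a Bayesian combination can be written as follows; the notation here is ours and only illustrates the underlying idea, the exact formulation being given in [2]:

\[
P(o \mid c) = \frac{P(c \mid o)\, P(o)}{\sum_{o'} P(c \mid o')\, P(o')}
\]

where P(c | o) is the relative frequency of color value c in the chrominance histogram of object o, P(o) is the prior probability that object o is present, and P(o | c) is the a-posteriori probability used to assign color value c to object o.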
However, for creating the initial a-priori probability distribution, the approach relies on elaborate object recognition mechanisms. Very similar to our approach, a contribution by Anzani et al. [1] describes a method for the initial estimation of color regions and their tracking to cope with changes in illumination. The color regions
are represented as a mixture of 3D-Gaussians (ellipsoids) in the HSV color space. The
tracking of color regions is realized by applying the EM algorithm.
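As a rough, purely illustrative sketch of this kind of representation (not the implementation of [1]; the library calls, the fixed number of components, and the BGR-to-HSV conversion are our own assumptions), such a mixture could be fitted with a standard EM implementation:

```python
# Illustrative sketch only: fit a mixture of 3D Gaussians (ellipsoids) to the
# HSV values of an image with EM. Component count and conversion are assumptions.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_mixture(bgr_frame, n_components=3):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)      # one (H, S, V) triple per pixel
    samples = hsv.reshape(-1, 3).astype(np.float64)       # flatten to N x 3 sample points
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full")         # full covariances -> ellipsoids
    gmm.fit(samples)                                      # EM estimation of means and covariances
    return gmm

# Tracking would then re-fit (or warm-start) the mixture on subsequent frames;
# the choice of n_components is exactly the difficulty discussed next.
```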
In contrast to our approach, however, this method has to deal with the problem of determining the optimal
number of ellipsoids representing the color region, in order to avoid overfitting to noise and to prevent an overly coarse representation. In our approach, noise elimination mechanisms are integrated into both the initialization step and the tracking. This allows us