Figure 2: Color divergence as a typical defect in multichannel image systems: a) ideal convergence; b) edge smearing caused by color divergence.
the estimation of object colors in images under illumination of unknown color characteristics (Barnard et al., 1997; Kobus, 2002; Verges-Llahi and Sanfeliu, 2003; Ebner, 2003). In this paper we consider how the restoration system is trained with suitable test patterns. Normally, additional constraints must be introduced to avoid an overly strong dependence of the correction result on the selected test patterns; otherwise, correct color reproduction cannot be guaranteed for arbitrary input images. Clearly, the restoration system must not introduce additional color errors; instead, where required, it should compensate color errors of the image forming system.
In many investigations of image restoration (Gonzalez and Woods, 1993; Andrews, 1977; Zheng and Hellwich, 2007) the process of image formation and restoration is treated as a system as shown in Fig. 3. We generalize the considerations to a multidimensional system with continuous coordinates $\vec{x} = (x_1 \, \ldots \, x_K)^T$ and C channels (e.g. colors). This scheme is very similar to approaches in system theory (Unbehauen, 1970; Küpfmüller, 1949). Jahn and Reulke (1995) apply system theory directly to optical sensors.

Figure 3: Basic model of image forming and restoration. (Block diagram: the object distribution $f(\vec{x})$ passes through the degrading system $h(\vec{x}, \vec{\xi})$ and is superposed with noise $n(\vec{x})$ to give the recorded image $g(\vec{x})$; the restoration system $w(\vec{x}, \vec{\xi})$ produces the estimate $\hat{f}(\vec{x})$.)

As an initial assumption, the characteristics of the correction system are opposite or "inverse"
to those of the degrading system, provided the corrected image approximates the input image as closely as possible. Such inverse problems are in general "ill-posed". In traditional designs, additional constraints are introduced to obtain a reasonable and stable solution (Gonzalez and Woods, 1993). Well-known image restoration techniques such as Wiener or inverse filtering are available (Stearns and Hush, 1999). Techniques that estimate optimal correction systems in this way are also called deconvolution (Gull and Daniell, 1978; Andrews, 1977; Zheng and Hellwich, 2007). However, deconvolution requires knowledge of system parameters, such as the noise impact or the point spread function, which have to be measured or estimated in advance. Furthermore, other image degradations, namely geometrical distortions, space variance of parameters, and unknown errors, require additional correction methods.
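As a rough illustration of the frequency-domain restoration techniques mentioned above, the following sketch (not from the paper) applies a Wiener filter to a blurred, noisy 1-D signal. The box PSF, noise level, and regularization constant k are illustrative assumptions; k plays the role of the additional constraint that stabilizes the ill-posed inversion.

```python
import numpy as np

def wiener_deconvolve(g, h, k=1e-3):
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + k) * G."""
    G = np.fft.fft(g)
    H = np.fft.fft(h, n=len(g))
    W = np.conj(H) / (np.abs(H) ** 2 + k)  # regularized inverse of H
    return np.real(np.fft.ifft(W * G))

rng = np.random.default_rng(0)
f = np.zeros(128)
f[40:80] = 1.0                           # original object distribution f
h = np.zeros(128)
h[:5] = 0.2                              # 5-tap box blur as a toy PSF h
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
g += 0.005 * rng.standard_normal(128)    # additive channel noise n

f_hat = wiener_deconvolve(g, h)

# The restored signal should be closer to f than the degraded one.
print(np.mean((f_hat - f) ** 2) < np.mean((g - f) ** 2))  # True
```

Without the constant k the filter would divide by near-zero spectral values of H and amplify noise without bound, which is the instability of the unconstrained inverse filter.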
In a K-dimensional¹ image forming system with C channels (colors, for instance), channel c has the illumination distribution $g_c(\vec{x})$. It results from summing, over all channels q, the K-fold integrals of the object illumination distribution channels $f_q(\vec{\xi})$ against the pulse response of the image formation system between channels c and q, $h_{c,q}(\vec{x}, \vec{\xi})$, also called the cross point spread function (PSF), and superposing the channel-specific noise function $n_c(\vec{x})$:
$$g_c(\vec{x}) = \sum_{q=1}^{C} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} h_{c,q}(\vec{x}, \vec{\xi}) \, f_q(\vec{\xi}) \; d\xi_1 \cdots d\xi_K + n_c(\vec{x}) \qquad (1)$$
with $\vec{x} = (x_1 \, \ldots \, x_K)^T$ the vector of continuous coordinates. The continuous vector of local coordinates $\vec{\xi} = (\xi_1 \, \ldots \, \xi_K)^T$ enables us to model space variance of the PSF. Geometric distortions are usually modeled by coordinate transforms and are also covered by Eq. 1.
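To make Eq. 1 concrete, here is a minimal NumPy sketch (not from the paper) of its discrete 1-D analogue (K = 1) with C = 2 channels. Each cross PSF $h_{c,q}$ is stored as a full N x N matrix, so space-variant blur and channel crosstalk are both representable; the kernel widths and the crosstalk weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C = 64, 2                       # N samples (K = 1), C channels

# Object distribution f_q(xi) for each channel q.
f = rng.random((C, N))

# Cross point spread functions h_{c,q}(x, xi) as full N x N matrices;
# row x holds the response of output position x to every input position xi.
def box_psf(width):
    H = np.zeros((N, N))
    for x in range(N):
        lo, hi = max(0, x - width), min(N, x + width + 1)
        H[x, lo:hi] = 1.0 / (hi - lo)  # normalized box, clipped at borders
    return H

H = np.empty((C, C, N, N))
for c in range(C):
    for q in range(C):
        # Diagonal terms: blur within a channel; off-diagonal: weak crosstalk.
        H[c, q] = box_psf(2) if c == q else 0.05 * box_psf(4)

# Discrete analogue of Eq. 1: g_c = sum_q H_{c,q} f_q + n_c
noise = 0.01 * rng.standard_normal((C, N))
g = np.einsum('cqxy,qy->cx', H, f) + noise

print(g.shape)  # (2, 64)
```

The sum over q in the einsum is the channel summation of Eq. 1, and the sum over the xi axis is the discretized integral; a space-variant PSF simply corresponds to rows of H[c, q] that are not shifted copies of each other.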
This equation reduces to a simple convolution if the pulse response of the system to be corrected can be considered stationary (space invariant, see (Andrews, 1977)). Andrews (1977) defines image restoration as the task of determining the original object distribution f given the recorded image g and knowledge of the point spread function h. Approaches that compensate for a convolution of the original by the PSF are often called image deconvolution (Gonzalez and Woods, 1993; Andrews, 1977). The task of image restoration therefore requires the determination of a system with pulse response $w(\vec{x}, \vec{\xi})$ which produces an output $\hat{f}(\vec{x})$ approximating the input $f(\vec{x})$.
Considering pixel-based image forming devices with images of limited extent leads to a discrete, algebraic representation of the system, which is shown in Fig. 4a). Multi-dimensional image data is vectorized to form image vectors. The length of these vectors is the product of the numbers of pixels in each dimension times the number of channels. As an example, let us consider the pixel values of an original image $f_{l_1, \ldots, l_K, c}$, where $l_1, \ldots, l_K$ are the pixel indexes in the K dimensions and c is the index of the image channel. This image is described by the object vector $\vec{f}$, which is obtained by vectorization of the input image pixels in
¹ We generalize our approach for multidimensional image systems with any number of channels.
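The vectorization described above can be sketched as follows for K = 2; the row-major pixel ordering is an assumption made here for illustration, since the ordering itself does not matter as long as it is used consistently.

```python
import numpy as np

# Illustrative 2-D image (K = 2) with C channels: shape (L1, L2, C).
L1, L2, C = 4, 5, 3
image = np.arange(L1 * L2 * C, dtype=float).reshape(L1, L2, C)

# Vectorize the pixel values f_{l1,l2,c} into the object vector f; its
# length is the product of the pixel counts times the channel count.
f_vec = image.reshape(-1)             # row-major flattening (an assumption)
print(f_vec.shape)                    # (60,)  i.e. 4 * 5 * 3

# Recover the pixel value at (l1, l2, c) from its position in the vector.
l1, l2, c = 2, 3, 1
idx = (l1 * L2 + l2) * C + c
assert f_vec[idx] == image[l1, l2, c]
```

With this mapping, the discrete counterpart of Eq. 1 becomes an ordinary matrix-vector product between a (large) system matrix and the object vector.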
MULTI-ERROR CORRECTION OF IMAGE FORMING SYSTEMS BY TRAINING SAMPLES MAINTAINING COLORS