Figure 4: Distribution of the RGB patches in the camera RGB cube, and on the RG, RB and GB projection planes, for integration times of 62.5 ms (right) and 500 ms (left).
Considering that an error variation above 200% of its lowest level makes the error unacceptable, calibration with the 170 patches, whether under the D65 or the halogen illuminant, requires keeping more than 50% of the whole patch set in order to bring the R, G and B errors within this range.
The various results show that the number of patches alone is not a sufficient criterion. Calibration quality also depends on the lighting and on the chart, and a criterion can only be stated once these conditions are known: for D65 lighting with a GretagMacbeth ColorChecker DC chart, calibration yields small errors if the number of kept patches is higher than 50% of the 170 original patches.
Other aspects have to be taken into account: when the integration time is low, patches that project outside the restricted represented volume lead to very high errors, since those patches were not taken into account when the calibration matrix was computed.
4 CONCLUSION
Once the calibration data have been computed, a raw acquisition of any object can be corrected and calibrated. Several steps are then required.
First, the integration time is selected automatically by dichotomy. Then, the calibration matrix corresponding to the nearest integration time used during the calibration step is selected. The raw RGB acquisition is then performed. The remaining calibration steps follow: corrections for noise, non-uniformity and white balance are applied following equation (4); a linear transformation of the data is then performed (to match the integration time used during acquisition with the one used during calibration). The calibration matrix can next be applied, followed by the inverse linear transformation.
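The correction pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the structure of the calibration data (dark frame, flat-field map, white-balance gains, 3x3 matrix per integration time) and the form of the corrections are hypothetical placeholders standing in for equation (4) and the stored calibration results.

```python
import numpy as np

def apply_calibration(raw, t_acq, calib):
    """Correct and calibrate a raw RGB image (sketch of the pipeline above).

    `calib` maps integration times (ms) to hypothetical per-time data:
    a dark frame, a flat-field map, white-balance gains and a 3x3 matrix.
    """
    # 1. Select the calibration entry with the nearest integration time.
    t_cal = min(calib, key=lambda t: abs(t - t_acq))
    c = calib[t_cal]
    # 2. Noise, non-uniformity and white-balance corrections (cf. eq. (4)).
    img = (raw - c["dark"]) / c["flat"] * c["wb_gains"]
    # 3. Linear transform: rescale the data to the calibration integration time.
    img *= t_cal / t_acq
    # 4. Apply the 3x3 calibration matrix to every pixel.
    h, w, _ = img.shape
    img = (img.reshape(-1, 3) @ c["matrix"].T).reshape(h, w, 3)
    # 5. Inverse linear transform back to the acquisition scale.
    return img * (t_acq / t_cal)
```

With an identity matrix, zero dark frame and unit flat-field and gains, the pipeline leaves the image unchanged, which makes the scaling and its inverse easy to check in isolation.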
We have presented a calibration method, with all its required steps, and tested the quality of this calibration as a function of integration time. Our final calibrated images show very good results. Further work could address calibration quality.
VISAPP 2007 - International Conference on Computer Vision Theory and Applications