Regarding visual errors, vignetting has a significant influence, which is also clearly visible in the original images (Fig. 2, right). Note that the image differences caused by this effect remain unchanged when compensating for lens distortions, as indicated by the final ratios of 65% and 4% after the second registration run. This is as expected, since moderate vignetting usually does not affect the registration process itself but merely the final visual appearance of the images. Consequently, a dedicated correction step would be required, but is not carried out in this case.
6 CONCLUSION
An objective assessment of 2D image registration quality is a challenging task. Since common measures of image quality have proven unsuitable for characterizing errors in image alignment, a new metric is proposed for this purpose. Promising results are obtained by exploiting local structural properties of registered images and by preserving this information during error pooling through voting-based strategies. The indicated registration quality correlates well with the visual appearance of the images, and various classes of differences can be distinguished. This capability is of significant importance for subsequent processing steps that aim at an automatic improvement of the results, since different error sources require individual compensation strategies.
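The voting-based pooling outlined above can be illustrated with a minimal sketch. The block size, the error threshold, and the assumption of a per-pixel quality map in [0, 1] (e.g. a local structural-similarity score) are illustrative choices, not the parameters actually used in this work:

```python
import numpy as np

def block_vote_quality(quality_map, block_size=16, error_threshold=0.5):
    """Pool a per-pixel quality map (values in [0, 1], higher = better)
    into block-wise votes: each block votes 'error' when its mean
    quality falls below the threshold. Returns the fraction of error
    votes and the boolean vote map (True = registration error suspected)."""
    h, w = quality_map.shape
    bh, bw = h // block_size, w // block_size
    votes = np.zeros((bh, bw), dtype=bool)
    for i in range(bh):
        for j in range(bw):
            block = quality_map[i * block_size:(i + 1) * block_size,
                                j * block_size:(j + 1) * block_size]
            votes[i, j] = block.mean() < error_threshold
    return votes.mean(), votes

# Toy example: a well-registered image pair (high local quality)
# with one poorly aligned patch.
q = np.full((64, 64), 0.9)
q[0:16, 0:16] = 0.2          # simulated local misalignment
ratio, votes = block_vote_quality(q)  # 1 of 16 blocks votes 'error'
```

Pooling by votes rather than by averaging the quality map preserves the spatial distribution of suspected errors, which is what allows different error sources to be told apart.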
While the obtained results demonstrate the high potential of this approach, perspectives for further refinement were also identified. Sometimes the distribution of blocks voting for registration errors does not clearly indicate the underlying error sources. We plan to tackle this problem by refining the spatial classification of the blocks and by taking global patterns into account. In addition, local differences resulting from moving objects or parallax will be examined in detail with respect to extraordinarily high intensity differences. Finally, the approach so far relies on various manually adjusted thresholds. Presumably these can be chosen appropriately according to the actual image contents, leading to a fully automatic and flexible approach for registration quality assessment.
ACKNOWLEDGEMENTS
This work was supported by a fellowship within the
Postdoc-Programme of the German Academic Ex-
change Service (DAAD), and has also been partially
funded through the MOMARNET EU Research and
Training Network project (MRTN-CT-2004-505026),
and by the Spanish Ministry of Education and Science
under grant CTM2004-04205.