information theoretic criterion to select the true
model.
Table 1: The model library used for image registration
(Szeliski, 2006).
This paper is organized as follows. We first
discuss some statistical model selection criteria
for computer vision applications in section 1.1.
Next, we describe our model-based image
registration method in section 2. An important
component of the proposed registration method is
model selection, so in section 3 we evaluate a
number of different model selection criteria for
the image registration application and show that
CAIC and GBIC outperform the other statistical criteria.
Section 4 is dedicated to making panoramic images,
and in section 5 we present our conclusions.
1.1 Model Selection Criteria and their
Use in Image Registration
Model selection criteria allow choosing the true
model by establishing a trade-off between the “fidelity”
and the “complexity” of that model. Such a trade-off
is necessary because the most complex (highest order)
model has the most degrees of freedom and therefore
always fits the data at least as well as any other model.
In this paper, we propose to use a model selection
criterion to detect the true transformation model for
registering a pair of images. If we use a more
general model than the true model (over-fitting), we
allow noise and outliers to affect parameter
estimation more severely. This is because the extra
degrees of freedom give the model enough flexibility
to bend and twist itself and consequently fit the
noise and outliers. In contrast, using a less
general model than the correct one results in
under-fitting. Under-fitting carries the danger of
rejecting inliers as outliers and thus disregarding
important information.
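The over-fitting behaviour described above can be seen in a small numerical experiment. The sketch below is purely illustrative (it uses generic polynomial models, not the registration models of this paper): data are generated by a simple degree-1 model plus noise, yet the residual sum of squares keeps shrinking as the model order grows, because the extra parameters fit the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a simple (degree-1) "true" model plus Gaussian noise.
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Residual sum of squares for polynomial fits of increasing order.
rss = {}
for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss[degree] = float(residuals @ residuals)

# Nested least-squares models: the highest-order model always attains
# the smallest RSS, even though its extra flexibility only fits noise.
assert rss[7] <= rss[3] <= rss[1]
```

This is why raw fidelity alone cannot identify the true model: a criterion must also penalize the added degrees of freedom.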
These model selection criteria score a model
based on two terms. The first measures the accuracy
of the fit (fidelity) and is usually the log-likelihood
of the estimated model parameters; provided the noise
is Gaussian, this likelihood reduces to a scaled sum
of squared residuals. The second term penalizes the
complexity of higher order models, so that the
criterion does not always choose the most general model.
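As a rough sketch of this two-term structure (the function name and exact constants here are our own illustration, following the classical AIC form rather than any specific criterion from Table 2): under i.i.d. Gaussian noise, minus twice the log-likelihood reduces, up to an additive constant, to n·log(RSS/n), and a penalty proportional to the number of free parameters is added.

```python
import numpy as np

def gaussian_aic(residuals, num_params):
    """AIC-style score = fidelity term + complexity penalty (lower is better).

    Assuming i.i.d. Gaussian residuals, -2 * log-likelihood reduces,
    up to an additive constant, to n * log(RSS / n); the classical AIC
    then adds 2k as the penalty for a model with k free parameters.
    """
    n = residuals.size
    rss = float(residuals @ residuals)
    return n * np.log(rss / n) + 2 * num_params
```

With the same residuals, each extra parameter raises the score by 2, so a more general model is preferred only when it reduces the residuals enough to outweigh its penalty.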
Akaike was perhaps the first to introduce a
model selection criterion, known as AIC (Akaike,
1974). The main idea behind AIC is that the
correct model should sufficiently fit any future data
with the same distribution as the current data. AIC
has been modified in many ways. For example,
many model selection criteria, including CAIC
(Bozdogan, 1987), GAIC (Kanatani, 2002), and
GIC (Torr, 1999), are derived from AIC.
Later, in 1978, Rissanen introduced MDL
(Rissanen, 1978). The underlying logic of MDL is that the
simplest model that sufficiently describes the data is
the best model. Kanatani derived GMDL (Kanatani,
2002), which has a very similar logic to MDL, specifically
for geometric fitting.
Another group of model selection criteria, such
as GBIC (Chickering & Heckerman, 1997), is based
on Bayesian reasoning. These criteria choose the
model that maximizes the conditional probability of
the model given the data set.
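For intuition only (the exact cost function of GBIC appears in Table 2; this sketch uses the classical BIC penalty, not the GBIC form): Bayesian-style criteria typically keep the same Gaussian fidelity term but replace AIC's fixed 2k penalty with k·log(n), so the reluctance to add parameters grows with the number of data points.

```python
import numpy as np

def gaussian_bic(residuals, num_params):
    """Bayesian-style score under i.i.d. Gaussian noise (lower is better).

    Same fidelity term as the AIC-style score, but the classical BIC
    penalty k * log(n) grows with the sample size n, so with more data
    the criterion penalizes extra parameters more heavily than AIC does.
    """
    n = residuals.size
    rss = float(residuals @ residuals)
    return n * np.log(rss / n) + num_params * np.log(n)
```

For n > e^2 ≈ 7.4 data points, log(n) > 2, so this penalty is stricter than AIC's and tends to select simpler models.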
The cost functions of the aforementioned criteria,
together with two other model selection criteria,
Mallows' Cp (Mallows, 1973) and SSD (Rissanen,
1984), are shown in Table 2. A more complete survey
of the available model selection criteria can be
found in (Gheissari & Bab-Hadiashar, 2003).
A few papers in the image registration literature,
such as (Bhat, et al., 2006), have been concerned
with choosing the true transformation model between
two images. However, they use a heuristic approach
to decide whether the transformation model is a
simple homography or the fundamental matrix.
MODEL BASED GLOBAL IMAGE REGISTRATION