taining when a luminaire has split into two parts or
components due to vibration:
1. There are too many luminaires present in the
current frame
2. The pixel count for each component is less than
the expected value from the last frame
When the recovery algorithm is called, it scans the
grey levels, comparing the current frame with the pre-
vious frame. At the same time, the pixel count (that
is, the number of pixels that constitute a luminaire) is
also compared. When a luminaire splits, the grey
level and the pixel count of each resulting component
decrease. Once the luminaires at fault are identified,
the problem is rectified by summing the associated
pixel counts and grey levels for the problem luminaire.
The problem luminaire still has multiple locations, owing to
the split, so a new position is evaluated using equa-
tion 1. The aforementioned CCA results are updated
accordingly, by removing one element from the data,
so that the effects of vibration are accounted for and
the actual number of luminaires is consistent with that
of the last frame.
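A simplified sketch of this recovery step is given below. It is an illustration only: the way the faulty components are selected is omitted, and equation 1 is stood in for by a grey-level-weighted centroid, which may not be its exact form.

# Illustrative sketch of the split-luminaire recovery step (not the
# paper's implementation). Each connected component is assumed to carry
# its centroid position, pixel count and summed grey level from the CCA.
from dataclasses import dataclass

@dataclass
class Luminaire:
    x: float           # centroid column
    y: float           # centroid row
    pixel_count: int   # number of pixels in the component
    grey_level: float  # summed grey level of the component

def merge_split_luminaire(parts):
    """Merge the components of a luminaire that vibration has split.

    Pixel counts and grey levels are summed, and a single position is
    re-evaluated; a grey-level-weighted centroid is assumed here as a
    stand-in for equation 1."""
    total_pixels = sum(p.pixel_count for p in parts)
    total_grey = sum(p.grey_level for p in parts)
    x = sum(p.x * p.grey_level for p in parts) / total_grey
    y = sum(p.y * p.grey_level for p in parts) / total_grey
    return Luminaire(x, y, total_pixels, total_grey)

def recover(current, faulty_indices):
    """Replace the split components with one merged luminaire, so the
    CCA results lose one element and the count matches the last frame."""
    parts = [current[i] for i in faulty_indices]
    kept = [l for i, l in enumerate(current) if i not in faulty_indices]
    return kept + [merge_split_luminaire(parts)]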
This section has introduced the reader to the NM
algorithm and its basic operation. The following sec-
tion introduces the KLT feature tracker algorithm.
3 KLT TRACKING ALGORITHM
This section introduces the theory behind the Kanade-
Lucas-Tomasi (KLT) algorithm before analysing how
it performs on the synthetic airport lighting model
presented in section 4.
As the camera moves, the patterns of image in-
tensities change in a complex way. In general, any
function of three variables I(x,y,t), where the space
variables x and y as well as the time variable t are
discrete and suitably bounded, can represent an im-
age sequence. However, images taken at near time
instants are usually strongly related to each other, be-
cause they refer to the same scene taken from only
slightly different viewpoints.
We usually express this correlation by saying that
there are patterns that move in an image stream. For-
mally, this means that the function I(x,y,t) is not ar-
bitrary, but satisfies the property shown in equation
5.
I(x, y, t + τ) = I(x − ξ, y − η, t),    (5)
where a later image, taken at time t + τ, can be ob-
tained by moving every point in the current image,
taken at time t, by a suitable amount. The amount
of motion d = (ξ, η) is called the displacement of the
point x = (x, y) between the time instants t and t + τ, and
is in general a function of x, y, t and τ (Shi and Tomasi,
1994).
An important problem in finding the displacement
d of a point from one frame to the next is that a single
pixel cannot be tracked, unless it has very distinctive
brightness with respect to its neighbours. In fact, the
value of the pixel can both change due to noise, and
be confused with adjacent pixels. As a consequence,
it is often hard or impossible to determine where the
pixel went in the subsequent frame, based only on lo-
cal information. For these reasons, the KLT algorithm
does not track single pixels but windows of pixels,
and it looks for windows that contain sufficient
texture. Formally, if we redefine J(x) = I(x, y, t + τ)
and I(x − d) = I(x − ξ, y − η, t), where the time has
been dropped for brevity, our local image model is
represented by equation 6.
J(x) = I(x − d) + n(x),    (6)
where n is noise. The displacement vector d is then
chosen so as to minimise the residue error defined by
the double integral over the given window W shown
in equation 7:
ε = ∫∫_W [I(x − d) − J(x)]² w dx    (7)
In this expression, w is a weighting function. In the
simplest case w could be set to 1. Alternatively, w
could be a Gaussian-like function, to emphasise the
central area of the window. The choice is user defined.
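As an illustration of equation 7 on discrete images, the following sketch evaluates the residue for one candidate displacement over a square window centred on a feature; the integer displacement, the window size and the names used (residue, half_win) are assumptions made for this example.

import numpy as np

def residue(I, J, x0, y0, d, half_win=7, w=None):
    """Weighted sum of squared differences between J(x) and I(x - d)
    over a (2*half_win + 1)^2 window centred on (x0, y0).
    I, J are 2-D grey-level arrays; d = (dx, dy) is an integer shift."""
    dx, dy = d
    ys = np.arange(y0 - half_win, y0 + half_win + 1)
    xs = np.arange(x0 - half_win, x0 + half_win + 1)
    patch_I = I[np.ix_(ys - dy, xs - dx)]   # I sampled at x - d
    patch_J = J[np.ix_(ys, xs)]             # J sampled at x
    if w is None:
        w = np.ones_like(patch_J, dtype=float)  # simplest case: w = 1
    diff = patch_I.astype(float) - patch_J.astype(float)
    return float(np.sum(w * diff ** 2))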
Several methods have been proposed to minimise the
residue in equation 7. This paper adopts the lin-
earisation method, which is valid when the displacement d is
much smaller than the window size and is detailed in
(Tomasi and Kanade, 1991).
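In outline, the linearisation replaces I(x − d) with its first-order Taylor expansion I(x) − g·d, where g is the spatial image gradient. Substituting this into equation 7 and setting the derivative of ε with respect to d to zero yields a 2 × 2 linear system Zd = e, with Z = ∫∫_W g gᵀ w dx and e = ∫∫_W [I(x) − J(x)] g w dx, which is solved (and typically iterated) for the displacement d. This summary is only a sketch; the full derivation is given in the cited report.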
3.1 Adapting the KLT Algorithm
By default the KLT algorithm accepts a series of
Portable Gray Map (PGM) image files as input and
outputs a Portable Pixel Map (PPM) results file. A
number of alterations, shown in figure 4, were carried
out in order to make the airport lighting images,
in either Audio Video Interleave (AVI) or uncom-
pressed bitmap (BMP) format, compatible with the
KLT tracking algorithm. These alterations allow the
KLT algorithm to accept a BMP, AVI or PGM file
as input and to store the results either in a data structure,
as an AVI video with the tracked results superimposed,
or as a sequence of PPM image files with the tracked
results superimposed. A number of other alterations were per-
formed and are highlighted in later sections.
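As an illustration only, the same input/output bridging can be expressed with OpenCV in Python. The actual alterations were made within the KLT tracker's own code, as shown in figure 4, and the helper names below (read_frames, write_result) are hypothetical.

import cv2

def read_frames(path):
    """Yield 8-bit greyscale frames from an AVI video, or a single
    frame from a BMP/PGM image, ready for the KLT tracker."""
    if path.lower().endswith(".avi"):
        cap = cv2.VideoCapture(path)
        ok, frame = cap.read()
        while ok:
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            ok, frame = cap.read()
        cap.release()
    else:  # BMP or PGM
        yield cv2.imread(path, cv2.IMREAD_GRAYSCALE)

def write_result(frame_grey, features, out_path):
    """Superimpose tracked feature positions on the frame and save the
    result, e.g. as a PPM image or a frame of an output video."""
    vis = cv2.cvtColor(frame_grey, cv2.COLOR_GRAY2BGR)
    for (x, y) in features:
        cv2.circle(vis, (int(x), int(y)), 3, (0, 0, 255), -1)
    cv2.imwrite(out_path, vis)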
The next section introduces a virtual model of the
approach lighting pattern used to compare and con-
trast the two tracking algorithms covered in sections
4.1 and 4.2.