from candidates to the previous tracking window.
Since all particles can point back to the target
window in different ways, it is hard to tell which
particle is the most probable without any
predefined knowledge of the image environment. In
this paper, we simply assume that all
p(H_g, V_g | H_i, V_i), i = 1, 2, …, m, are equal.
However, this assumption may not hold in some
practical applications, for instance a mobile vision
system, where the previous motion trajectory of the
mobile platform would provide more information for
the back projection; this will be investigated in our
future work.
Considering that the PSO-based searching
algorithm returns all candidates whose appearance
histograms are good enough, it is reasonable to
ignore the histogram here and simplify (7) as:

p(V_i | V_g) = c p(V_i),  (8)
where c is a positive constant factor, and p(V_i)
represents the probability of a particle on the
motion trajectory. According to the inertia of motion,
p(V_i) depends on the distance between V_i and V_g:
the closer the two vectors are, the higher the
probability of the corresponding particle. This turns
(8) into the following equation:

p(V_i | V_g) = c p(V_i) = k / D(V_i, V_g),  (9)
where k is a positive factor. If the two vectors are
shifted to the same origin, the distance between
the two vectors becomes the distance between two
points, for which the Euclidean distance can be
calculated.
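Under the assumption above, selecting the best candidate reduces to maximizing k / D(V_i, V_g), i.e. picking the candidate whose motion vector lies closest to the target's. A minimal sketch of this step in C++ (the names Vec2, trajectoryLikelihood, and selectBest are illustrative, not taken from the paper's implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A motion vector shifted to a common origin, so its distance to
// another vector is the ordinary Euclidean distance between points.
struct Vec2 { double x, y; };

// Euclidean distance D(V_i, V_g) between two vectors sharing an origin.
double distance(const Vec2& a, const Vec2& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// p(V_i | V_g) = k / D(V_i, V_g), as in Eq. (9); the small eps guards
// against the degenerate zero-distance case.
double trajectoryLikelihood(const Vec2& vi, const Vec2& vg,
                            double k, double eps = 1e-9) {
    return k / (distance(vi, vg) + eps);
}

// Pick the candidate with the highest likelihood under Eq. (9),
// i.e. the one whose motion vector is closest to the target's.
std::size_t selectBest(const std::vector<Vec2>& candidates,
                       const Vec2& vg, double k) {
    std::size_t best = 0;
    double bestP = trajectoryLikelihood(candidates[0], vg, k);
    for (std::size_t i = 1; i < candidates.size(); ++i) {
        double p = trajectoryLikelihood(candidates[i], vg, k);
        if (p > bestP) { bestP = p; best = i; }
    }
    return best;
}
```

Note that the value of k cancels in the comparison, consistent with Eq. (9) being used only to rank candidates.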
5 EXPERIMENTAL RESULTS
To evaluate the proposed algorithm, several video
clips from the PETS database are used in this paper.
The program is written in C++ using the OpenCV
library and runs on a Pentium 4 desktop. Most of the
data come from a surveillance system with a
stationary camera.
Figure 2 shows the process of identifying moving
objects by motion detection, where the pictures from
left to right are the true image, the foreground, and
the background, respectively. If there is no moving
object, as shown in Figure 2(a), the background is
the same as the true image and the foreground is
empty, since no object is detected. With some general
preprocessing, the noise can be suppressed and the
background model can be enhanced. When a car
drives in, it is detected and recognized as an object.
As shown in Figure 2(b), a car shape appears in the
foreground while the background remains the same
as the true image. For most test data with a static
background, motion detection can detect moving
objects quickly. For test data captured in a dynamic
environment, some prior knowledge of the objects,
such as their moving behaviors, would help to
improve the detection performance.
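The motion-detection step described above can be illustrated with simple background subtraction; the following is only a sketch of the idea in plain C++ (the paper's actual implementation uses OpenCV and is not detailed here, and the function names and the blending update are assumptions):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Mark a pixel as foreground when it differs from the background
// model by more than a threshold; a simple stand-in for the paper's
// background-subtraction step.
std::vector<uint8_t> detectForeground(const std::vector<uint8_t>& frame,
                                      const std::vector<uint8_t>& background,
                                      int threshold) {
    std::vector<uint8_t> mask(frame.size(), 0);
    for (std::size_t i = 0; i < frame.size(); ++i) {
        int diff = static_cast<int>(frame[i]) - static_cast<int>(background[i]);
        if (diff < 0) diff = -diff;
        mask[i] = (diff > threshold) ? 255 : 0;  // 255 = moving pixel
    }
    return mask;
}

// Slowly blend the current frame into the background model, so the
// model adapts to gradual changes such as lighting (alpha in [0, 1]).
void updateBackground(std::vector<uint8_t>& background,
                      const std::vector<uint8_t>& frame, double alpha) {
    for (std::size_t i = 0; i < background.size(); ++i) {
        background[i] = static_cast<uint8_t>(
            (1.0 - alpha) * background[i] + alpha * frame[i] + 0.5);
    }
}
```

In an empty scene the mask stays blank, matching Figure 2(a); when a car enters, the differing pixels form the car-shaped foreground of Figure 2(b).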
Figure 3 shows the procedure of the proposed
PSO algorithm searching for candidate windows. A
number of particles are distributed around the target
according to the tracking window of the previous
frame in Figure 3(a). Due to the uncertainty of the
object movement, these windows are initially given
different sizes and locations near the object detected
by motion detection. The particles then start to move
around and eventually converge to some optimal
points under the PSO rules. Figure 3(b) shows these
optimal points, which are good candidates for
tracking windows. As shown in Figure 3(b), these
candidate windows are obviously much closer to the
car than the initial windows in Figure 3(a), which
demonstrates the efficiency of the PSO-based
searching algorithm. The Bayesian filter is then
applied to select the best match from these good
candidates, as shown in Figure 3(c). Usually, the
PSO-based searching algorithm converges quickly.
In our experiments, 20 windows are generated
initially; after 10 to 15 time steps, the windows
cluster around the object.
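The PSO search over candidate windows can be sketched as follows; here fitness() stands in for the appearance-histogram score, the window state (x, y, s) and the constants w, c1, c2 are typical PSO choices rather than the paper's exact parameters:

```cpp
#include <random>
#include <vector>

// A candidate tracking window: centre (x, y) and scale s.
struct Window { double x, y, s; };

struct Particle {
    Window pos, vel, best;   // current state, velocity, personal best
    double bestScore;
};

// One PSO iteration with the standard velocity update: inertia w plus
// random attraction toward the personal best (c1) and global best (c2).
template <typename Fitness>
void psoStep(std::vector<Particle>& swarm, Window& gBest, double& gScore,
             Fitness fitness, std::mt19937& rng,
             double w = 0.7, double c1 = 1.5, double c2 = 1.5) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (auto& p : swarm) {
        p.vel.x = w * p.vel.x + c1 * U(rng) * (p.best.x - p.pos.x)
                              + c2 * U(rng) * (gBest.x - p.pos.x);
        p.vel.y = w * p.vel.y + c1 * U(rng) * (p.best.y - p.pos.y)
                              + c2 * U(rng) * (gBest.y - p.pos.y);
        p.vel.s = w * p.vel.s + c1 * U(rng) * (p.best.s - p.pos.s)
                              + c2 * U(rng) * (gBest.s - p.pos.s);
        p.pos.x += p.vel.x;  p.pos.y += p.vel.y;  p.pos.s += p.vel.s;
        double score = fitness(p.pos);
        if (score > p.bestScore) { p.bestScore = score; p.best = p.pos; }
        if (score > gScore)      { gScore = score;      gBest = p.pos; }
    }
}
```

Repeating psoStep for 10 to 15 iterations moves the initially scattered windows of Figure 3(a) toward the clustered candidates of Figure 3(b).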
To evaluate the robustness of the proposed
tracking method under occlusion, another experiment
is carried out, as shown in Figure 4. First, a white car
drives in and is detected as the target, marked by a
blue rectangular window, as shown in Figure 4(a).
Then the white car traverses the scene and is
occluded by a block of text in the image, as shown in
Figures 4(b) and (c). During the occlusion, the
tracking window changes with the scene but still
tracks the car. As shown in Figure 4(b), when the car
starts moving into the block, the tracking window
has almost the same size as the one in Figure 4(a).
Under the influence of the block, the tracking
window shifts a little and shrinks, but the object
remains locked. When the car moves away, as shown
in Figure 4(d), the window becomes smaller until it
disappears. It can be seen that the tracker can still
lock onto the object under occlusion.
The above experiments demonstrate that the
proposed algorithm is efficient and robust. However,
in some complex situations, such as a dynamic
background, more robust motion detection is
required. For some noisy videos, the tracking
window may be lost due to frame skips; a recovery
algorithm may be needed to increase the system's
reliability.
ICINCO 2007 - International Conference on Informatics in Control, Automation and Robotics