NONRIGID OBJECT SEGMENTATION AND OCCLUSION
DETECTION IN IMAGE SEQUENCES
Ketut Fundana, Niels Chr. Overgaard, Anders Heyden
Applied Mathematics Group, School of Technology and Society, Malmö University, SE-205 06 Malmö, Sweden
David Gustavsson, Mads Nielsen
DIKU, Copenhagen University, DK-2100 Copenhagen, Denmark
Keywords:
Segmentation, occlusion, image sequences, variational active contour, variational contour matching
Abstract:
We address the problem of nonrigid object segmentation in image sequences in the presence of occlusions.
The proposed variational segmentation method is based on a region-based active contour of the Chan-Vese
model augmented with a frame-to-frame interaction term as a shape prior. The interaction term is constructed
to be pose-invariant by minimizing over a group of transformations and to allow moderate deformation in
the shape of the contour. The segmentation method is then coupled with a novel variational contour matching
formulation between two consecutive contours, which gives a mapping of the intensities from the interior of the
previous contour to the interior of the next. With this information, occlusions can be detected and located using
deviations from predicted intensities, and the missing intensities in the occluded regions can be reconstructed.
After reconstructing the occluded regions in the novel image, the segmentation can be improved. Experimental
results on synthetic and real image sequences are shown.
1 INTRODUCTION
Object segmentation is one of the most important processes in computer vision; it aims at extracting the objects of interest from an image. This is a difficult task, since the objects of interest can be diverse and complex, and their interpretation varies from one observer to another. The task becomes harder when the objects to be segmented are moving and nonrigid, and harder still when occlusions appear. The shape of nonrigid, moving objects may vary considerably along an image sequence due to, for instance, deformations or occlusions, which places additional demands on the segmentation process.
Numerous methods have been proposed and applied to this problem. Active contours are powerful methods for image segmentation; they can be boundary-based, such as geodesic active contours (Caselles et al., 1997), or region-based, such as the Chan-Vese model (Chan and Vese, 2001), and are formulated as variational problems. These variational formulations perform quite well and are often implemented using level sets. Active contour based segmentation methods often fail, however, due to noise, clutter and occlusion. To make the segmentation process robust against these effects, shape priors have been incorporated into the segmentation process, as in (Chan and Zhu, 2005; Cremers et al., 2003; Cremers and Soatto, 2003; Cremers and Funka-Lea, 2005; Rousson and Paragios, 2002; Leventon et al., 2000; Bresson et al., 2006; Tsai et al., 2003; Chen et al., 2002). However, major occlusions remain a serious problem. To improve the robustness of segmentation methods in the presence of occlusions, it is necessary to detect and locate the occlusions (Strecha et al., 2004; Gentile et al., 2004; Konrad and Ristivojevic, 2003); this information can then be used to improve the segmentation. For example, (Thiruvenkadam et al., 2007) use the spatial order information in their image model to impose shape prior constraints dynamically, on occluded boundaries only.
Fundana K., Chr. Overgaard N., Heyden A., Gustavsson D. and Nielsen M. (2008). NONRIGID OBJECT SEGMENTATION AND OCCLUSION DETECTION IN IMAGE SEQUENCES. In Proceedings of the Third International Conference on Computer Vision Theory and Applications, pages 211-218. DOI: 10.5220/0001076102110218. Copyright © SciTePress.

This paper focuses on a region-based variational approach to segmenting a nonrigid object, possibly partially occluded, in an image sequence. We propose and analyze a novel variational segmentation method for image sequences that can deal with shape deformations and at the same time is robust to noise, clutter and occlusions. The proposed method is based on minimizing an energy functional containing the standard Chan-Vese functional as one part and, as a second part, a term that penalizes the deviation from the previous shape. The second part of the functional is based on a transformed distance map to the previous contour, where different transformation groups, such as Euclidean, similarity or affine, can be used depending on the particular application. This variational framework is then augmented with a novel contour flow algorithm, giving a mapping of the intensities inside the contour of one image to the inside of the contour in the next image. Using this mapping, occlusions can be detected and located by simply thresholding the difference between the transformed intensities and the observed ones in the novel image. Using this occlusion information, the occluded regions are reconstructed to improve the segmentation results.
2 SEGMENTATION OF IMAGE SEQUENCES
In this section, we describe the region-based segmentation model of Chan-Vese (Chan and Vese, 2001) and a variational model for updating segmentation results from one frame to the next in an image sequence.
2.1 Region-Based Segmentation
The idea of the Chan-Vese model (Chan and Vese, 2001) is to find a contour $\Gamma$ such that the image $I$ is optimally approximated by a gray scale value $\mu_{int}$ on $int(\Gamma)$, the inside of $\Gamma$, and by another gray scale value $\mu_{ext}$ on $ext(\Gamma)$, the outside of $\Gamma$. The optimal contour $\Gamma^*$ is defined as the solution of the variational problem,

$$E_{CV}(\Gamma^*) = \min_{\Gamma} E_{CV}(\Gamma), \qquad (1)$$
where $E_{CV}$ is the Chan-Vese functional,

$$E_{CV}(\mu,\Gamma) = \alpha|\Gamma| + \beta\Big(\frac{1}{2}\int_{int(\Gamma)} (I(x)-\mu_{int})^2\,dx + \frac{1}{2}\int_{ext(\Gamma)} (I(x)-\mu_{ext})^2\,dx\Big). \qquad (2)$$

Here $|\Gamma|$ is the arc length of the contour, $\alpha,\beta > 0$ are weight parameters, and

$$\mu_{int} = \mu_{int}(\Gamma) = \frac{1}{|int(\Gamma)|}\int_{int(\Gamma)} I(x)\,dx, \qquad (3)$$

$$\mu_{ext} = \mu_{ext}(\Gamma) = \frac{1}{|ext(\Gamma)|}\int_{ext(\Gamma)} I(x)\,dx. \qquad (4)$$
The gradient descent flow for the problem of minimizing a functional $E_{CV}(\Gamma)$ is the solution to the initial value problem:

$$\frac{d}{dt}\Gamma(t) = -\nabla E_{CV}(\Gamma(t)), \qquad \Gamma(0) = \Gamma_0, \qquad (5)$$

where $\Gamma_0$ is an initial contour. Here $\nabla E_{CV}(\Gamma)$ is the $L^2$-gradient of the energy functional $E_{CV}(\Gamma)$, cf. e.g. (Solem and Overgaard, 2005) for definitions of these notions. The $L^2$-gradient of $E_{CV}$ is

$$\nabla E_{CV}(\Gamma) = \alpha\kappa + \beta\Big(\frac{1}{2}(I-\mu_{int}(\Gamma))^2 - \frac{1}{2}(I-\mu_{ext}(\Gamma))^2\Big), \qquad (6)$$

where $\kappa$ is the curvature.
In the level set framework (Osher and Fedkiw, 2003), a curve evolution, $t \mapsto \Gamma(t)$, can be represented by a time dependent level set function $\phi : \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}$ as $\Gamma(t) = \{x \in \mathbb{R}^2;\ \phi(x,t) = 0\}$, where $\phi(x) < 0$ and $\phi(x) > 0$ define the regions inside and outside of $\Gamma$, respectively. The normal velocity of $t \mapsto \Gamma(t)$ is the scalar function $d\Gamma/dt$ defined by

$$\frac{d}{dt}\Gamma(t)(x) := -\frac{\partial\phi(x,t)/\partial t}{|\nabla\phi(x,t)|} \qquad (x \in \Gamma(t)). \qquad (7)$$

Recall that the outward unit normal $n$ and the curvature $\kappa$ can be expressed in terms of $\phi$ as $n = \nabla\phi/|\nabla\phi|$ and $\kappa = \nabla\cdot\big(\nabla\phi/|\nabla\phi|\big)$.

Combined with the definition of gradient descent evolutions (5) and the formula for the normal velocity (7), this gives the gradient descent procedure in the level set framework:

$$\frac{\partial\phi}{\partial t} = \Big[\alpha\kappa + \beta\Big(\frac{1}{2}(I-\mu_{int}(\Gamma))^2 - \frac{1}{2}(I-\mu_{ext}(\Gamma))^2\Big)\Big]\,|\nabla\phi|,$$

where $\phi(x,0) = \phi_0(x)$ represents the initial contour $\Gamma_0$.
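To make the level set procedure concrete, here is a minimal numpy sketch of one descent step of the evolution above. It is an illustration, not the authors' implementation: the helper name `chan_vese_step` and all parameter values are our own, the sign convention is $\phi < 0$ inside as in the text, and practical details (re-initialization, CFL-limited time steps, narrow bands) are omitted.

```python
import numpy as np

def chan_vese_step(phi, I, alpha=0.2, beta=1.0, dt=0.5, eps=1e-8):
    """One explicit step of phi_t = [alpha*kappa + beta*(0.5(I-mu_int)^2
    - 0.5(I-mu_ext)^2)] * |grad phi|, with phi < 0 inside the contour."""
    inside = phi < 0
    mu_int = I[inside].mean() if inside.any() else 0.0
    mu_ext = I[~inside].mean() if (~inside).any() else 0.0

    gy, gx = np.gradient(phi)                  # central differences
    grad_norm = np.sqrt(gx**2 + gy**2) + eps
    # curvature kappa = div(grad phi / |grad phi|)
    ky, _ = np.gradient(gy / grad_norm)
    _, kx = np.gradient(gx / grad_norm)
    kappa = kx + ky

    data = 0.5 * (I - mu_int)**2 - 0.5 * (I - mu_ext)**2
    return phi + dt * (alpha * kappa + beta * data) * grad_norm
```

Iterating this step on a piecewise-constant image drives the zero level set toward the boundary of the bright region.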
2.2 The Interaction Term
The interaction $E_I(\Gamma_0,\Gamma)$ between a fixed contour $\Gamma_0$ and an active contour $\Gamma$ may be regarded as a shape prior and can be chosen in several different ways, such as the area of the symmetric difference of the sets $int(\Gamma)$ and $int(\Gamma_0)$, cf. (Chan and Zhu, 2005), or the pseudo-distance, cf. (Cremers and Soatto, 2003).

Let $\phi = \phi(x)$ and $\phi_0 = \phi_0(x)$ denote the signed distance functions associated with $\Gamma$ and $\Gamma_0$, respectively, where $x$ is a generic point in the image domain $R$. Assuming that $\Gamma_0$ is already optimally aligned with $\Gamma$ in the appropriate sense, the interaction term proposed in this paper has the form:

$$E_I(\Gamma,\Gamma_0) = \int_{int(\Gamma)} \phi_0(x)\,dx\,. \qquad (8)$$
The area of the symmetric difference, which has been used in (Chan and Zhu, 2005) and (Riklin-Raviv et al., 2007), has the form:

$$E_I^{SD}(\Gamma,\Gamma_0) = \mathrm{area}(\Omega \,\triangle\, \Omega_0)\,, \qquad (9)$$
where $\Omega \,\triangle\, \Omega_0 := (\Omega \cup \Omega_0)\setminus(\Omega \cap \Omega_0)$ denotes the symmetric difference of the two sets $\Omega = int(\Gamma)$ and $\Omega_0 = int(\Gamma_0)$. The pseudo-distance has the form:

$$E_I^{PD}(\Gamma,\Gamma_0) = \frac{1}{2}\int_{R} [\phi(x) - \phi_0(x)]^2\,dx\,, \qquad (10)$$

which has been studied, with various minor modifications, in (Rousson and Paragios, 2002), (Paragios et al., 2003), and (Cremers and Soatto, 2003).
The main benefit of our interaction term defined in (8) is that its $L^2$-gradient can be computed easily:

$$\nabla_\Gamma E_I(\Gamma,\Gamma_0) = \phi_0(x) = \phi(\Gamma_0;x) \qquad (x \in \Gamma),$$

and that this gradient is small if $\Gamma$ is close to the shape prior $\Gamma_0$, and large if the active contour is far from the shape prior. However, $E_I(\Gamma,\Gamma_0)$ is not symmetric in $\Gamma$ and $\Gamma_0$, which may in general be considered a drawback. In our particular application, however, where we want to use shape information from a previous image frame ($\Gamma_0$) to guide the segmentation in the current frame ($\Gamma$), the lack of symmetry does not seem to be a serious issue.
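On a pixel grid, the interaction term (8) is just the sum of the prior's signed distance values over the interior of the current contour. The sketch below is our own illustration: a brute-force signed distance (negative inside, as in the text) stands in for a proper fast distance transform such as `scipy.ndimage.distance_transform_edt`.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance to the region boundary (negative inside).
    O(pixels * boundary points) -- fine for small illustrative grids only."""
    h, w = mask.shape
    pad = np.pad(mask, 1, constant_values=False)
    # inside pixels with at least one outside 4-neighbour form the boundary
    nb_out = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
              ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    by, bx = np.nonzero(mask & nb_out)
    yy, xx = np.mgrid[:h, :w]
    d = np.sqrt((yy[..., None] - by)**2 + (xx[..., None] - bx)**2).min(axis=-1)
    return np.where(mask, -d, d)

def interaction(mask, mask0):
    """E_I(Gamma, Gamma_0): sum of phi_0 over int(Gamma), a discrete Eq. (8)."""
    return signed_distance(mask0)[mask].sum()
```

For identical regions the value is negative ($\phi_0 < 0$ everywhere inside), and it grows as the contour drifts away from the prior, which is exactly the behaviour the gradient above expresses.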
The proposed interaction term is constructed to be pose-invariant and to allow moderate deformations in shape. Consider the group of translations, parametrized by a vector $a \in \mathbb{R}^2$. We want to determine the optimal translation vector $a = a(\Gamma)$, so the interaction $E_I = E_I(\Gamma_0,\Gamma)$ is defined by the formula,

$$E_I(\Gamma_0,\Gamma) = \min_{a} \int_{int(\Gamma)} \phi_0(x-a)\,dx. \qquad (11)$$

Minimizing over groups of transformations is the standard device to obtain pose-invariant interactions, see (Chan and Zhu, 2005) and (Cremers and Soatto, 2003).

Since this is an optimization problem, $a(\Gamma)$ can be found using a gradient descent procedure. The optimal translation $a(\Gamma)$ can be obtained as the limit, as time $t$ tends to infinity, of the solution to the initial value problem

$$\dot{a}(t) = \int_{int(\Gamma)} \nabla\phi_0(x-a(t))\,dx\,, \qquad a(0) = 0\,. \qquad (12)$$

Similar gradient descent schemes can be devised for rotations and scalings (in the case of similarity transforms), cf. (Chan and Zhu, 2005).
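The translation estimate (12) discretizes naturally: sum the gradient of the prior's signed distance over the interior pixels and take small steps. Below is a sketch under our own assumptions (nearest-neighbour sampling of $\nabla\phi_0$, a fixed step size, area-normalised steps, helper names of our choosing).

```python
import numpy as np

def optimal_translation(mask, phi0, steps=400, dt=0.1):
    """Gradient descent for a(Gamma) in Eq. (12):
    a'(t) = integral over int(Gamma) of grad(phi0)(x - a(t)) dx.
    The step is normalised by the region area for stability."""
    gy, gx = np.gradient(phi0)          # discrete grad(phi0)
    ys, xs = np.nonzero(mask)           # pixels of int(Gamma)
    h, w = phi0.shape
    a = np.zeros(2)                     # a = (a_y, a_x)
    for _ in range(steps):
        yq = np.clip(np.round(ys - a[0]).astype(int), 0, h - 1)
        xq = np.clip(np.round(xs - a[1]).astype(int), 0, w - 1)
        da = np.array([gy[yq, xq].mean(), gx[yq, xq].mean()])
        a += dt * da
    return a
```

Run on a disk-shaped prior and the same disk translated, the iteration recovers approximately the true translation vector.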
2.3 Using the Interaction Term in Segmentation of Image Sequences
Let $I_j : D \to \mathbb{R}$, $j = 1,\ldots,N$, be a succession of $N$ frames from a given image sequence. Also, for some integer $k$, $1 \le k \le N$, suppose that all the frames $I_1, I_2, \ldots, I_{k-1}$ have already been segmented, such that the corresponding contours $\Gamma_1, \Gamma_2, \ldots, \Gamma_{k-1}$ are available. In order to take advantage of the prior knowledge obtained from earlier frames in the segmentation of $I_k$, we propose the following method: If $k = 1$, i.e. if no previous frames have actually been segmented, then we just use the standard Chan-Vese model, as presented in Sect. 2.1. If $k > 1$, then the segmentation of $I_k$ is given by the contour $\Gamma_k$ which minimizes an augmented Chan-Vese functional of the form,
$$E_{CV}^{A}(\Gamma_{k-1},\Gamma_k) := E_{CV}(\Gamma_k) + \gamma E_I(\Gamma_{k-1},\Gamma_k), \qquad (13)$$

where $E_{CV}$ is the Chan-Vese functional, $E_I = E_I(\Gamma_{k-1},\Gamma_k)$ is an interaction term, which penalizes deviations of the current active contour $\Gamma_k$ from the previous one, $\Gamma_{k-1}$, and $\gamma > 0$ is a coupling constant which determines the strength of the interaction. See Algorithm 1.
The augmented Chan-Vese functional (13) is minimized using the standard gradient descent (5) described in Sect. 2.1, with $\nabla E$ equal to

$$\nabla E_{CV}^{A}(\Gamma_{k-1},\Gamma_k) := \nabla E_{CV}(\Gamma_k) + \gamma\,\nabla E_I(\Gamma_{k-1};\Gamma_k), \qquad (14)$$

and the initial contour $\Gamma(0) = \Gamma_{k-1}$. Here $\nabla E_{CV}$ is the $L^2$-gradient (6) of the Chan-Vese functional, and $\nabla E_I$ is the $L^2$-gradient of the interaction term, which is given by the formula,

$$\nabla E_I(\Gamma_{k-1},\Gamma_k; x) = \phi_{k-1}(x - a(\Gamma_k)) \qquad (\text{for } x \in \Gamma_k). \qquad (15)$$

Here $\phi_{k-1}$ is the signed distance function for $\Gamma_{k-1}$.
Algorithm 1: Segmentation of an N-frame image sequence, applied to the frames I_2, ..., I_N.
INPUT: Current frame I_k and the level set function φ_{k-1} from the previous frame.
OUTPUT: Optimal level set function φ_k.
1. Initialization: Initialize the level set function φ_k = φ_{k-1}.
2. Computation: Compute the optimal translation vector, then the gradient descent of (14).
3. Re-initialization: Re-initialize the level set function φ_k.
4. Convergence: Stop if the level set evolution converges; otherwise go to step 2.
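Step 2 of Algorithm 1 can be sketched by adding the translated prior $\phi_{k-1}(x-a)$ of Eq. (15) to the Chan-Vese gradient. Everything below (helper name, parameter values, nearest-neighbour sampling of the prior) is our own illustrative choice, and the optimal translation $a$ is assumed to be precomputed.

```python
import numpy as np

def augmented_step(phi, phi_prev, I, a, alpha=0.2, beta=1.0,
                   gamma=0.6, dt=0.2, eps=1e-8):
    """One descent step for the augmented functional (13): the Chan-Vese
    gradient (6) plus gamma * phi_{k-1}(x - a), Eq. (15); phi < 0 inside."""
    inside = phi < 0
    mu_int = I[inside].mean() if inside.any() else 0.0
    mu_ext = I[~inside].mean() if (~inside).any() else 0.0
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + eps
    ky, _ = np.gradient(gy / grad_norm)
    _, kx = np.gradient(gx / grad_norm)
    kappa = kx + ky
    # interaction gradient: prior signed distance sampled at x - a
    h, w = phi.shape
    yy, xx = np.mgrid[:h, :w]
    yq = np.clip(np.round(yy - a[0]).astype(int), 0, h - 1)
    xq = np.clip(np.round(xx - a[1]).astype(int), 0, w - 1)
    prior = phi_prev[yq, xq]
    data = 0.5 * (I - mu_int)**2 - 0.5 * (I - mu_ext)**2
    return phi + dt * (alpha * kappa + beta * data + gamma * prior) * grad_norm
```

With γ large enough, object pixels darkened by an occluder keep a negative (interior) level set value, because the prior term dominates the data term there.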
3 OCCLUSION DETECTION BY CONTOUR MATCHING
In this section we present a variational solution to a contour matching problem. We first give the theory behind the contour matching problem and then describe the algorithm we use to implement it for detecting and locating occlusions. See (Gustavsson et al., 2007) for more detail.
3.1 A Contour Matching Problem
Suppose we have two simple closed curves $\Gamma_1$ and $\Gamma_2$ contained in the image domain $\Omega$. Find the "most economical" mapping $\Phi = \Phi(x) : \Omega \to \mathbb{R}^2$ such that $\Phi$ maps $\Gamma_1$ onto $\Gamma_2$, i.e. $\Phi(\Gamma_1) = \Gamma_2$. The latter condition is to be understood in the sense that if $\alpha = \alpha(\gamma) : [0,1] \to \Omega$ is a positively oriented parametrization of $\Gamma_1$, then $\beta(\gamma) = \Phi(\alpha(\gamma)) : [0,1] \to \Omega$ is a positively oriented parametrization of $\Gamma_2$ (allowing some parts of $\Gamma_2$ to be covered multiple times).

To present our variational solution of this problem, let $M$ denote the set of twice differentiable mappings $\Phi$ which map $\Gamma_1$ to $\Gamma_2$ in the above sense. Loosely speaking,

$$M = \{\Phi \in C^2(\Omega;\mathbb{R}^2) \mid \Phi(\Gamma_1) = \Gamma_2\}.$$
Moreover, given a mapping $\Phi : \Omega \to \mathbb{R}^2$, not necessarily a member of $M$, we express $\Phi$ in the form $\Phi(x) = x + U(x)$, where the vector valued function $U = U(x) : \Omega \to \mathbb{R}^2$ is called the displacement field associated with $\Phi$, or simply the displacement field. It is sometimes necessary to write out the components of the displacement field; $U(x) = (u_1(x), u_2(x))^T$.

We now define the "most economical" map to be the member $\Phi^*$ of $M$ which minimizes the following energy functional:
$$E[\Phi] = \frac{1}{2}\int_\Omega \|DU(x)\|_F^2\,dx\,, \qquad (16)$$

where $\|DU(x)\|_F$ denotes the Frobenius norm of $DU(x) = [\nabla u_1(x), \nabla u_2(x)]^T$, which for an arbitrary matrix $A \in \mathbb{R}^{2\times 2}$ is defined by $\|A\|_F^2 = \mathrm{tr}(A^T A)$. That is, the optimal matching is given by

$$\Phi^* = \arg\min_{\Phi\in M} E[\Phi]\,. \qquad (17)$$
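Discretely, (16) is just the Dirichlet energy of the two displacement components. A small numpy illustration (the helper name and grid-based finite differences are our own):

```python
import numpy as np

def dirichlet_energy(U):
    """Discrete E[Phi] = 1/2 * sum over pixels of ||DU||_F^2, Eq. (16),
    for a displacement field U of shape (H, W, 2)."""
    total = 0.0
    for c in range(2):                   # components u_1 and u_2
        gy, gx = np.gradient(U[..., c])
        total += (gy**2 + gx**2).sum()   # tr(DU^T DU) per pixel, summed
    return 0.5 * total
```

A constant field costs nothing; the energy penalizes only spatial variation of U, which is why the minimizer diffuses the boundary constraint smoothly into the interior.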
The solution $\Phi^*$ of the minimization problem (17) must satisfy the following Euler-Lagrange equation:

$$0 = \begin{cases} \Delta U^* - (\Delta U^* \cdot \tilde{n}_{\Gamma_2})\,\tilde{n}_{\Gamma_2}, & \text{on } \Gamma_1,\\ \Delta U^*, & \text{otherwise,}\end{cases} \qquad (18)$$

where $\tilde{n}_{\Gamma_2}(x) = n_{\Gamma_2}(x + U^*(x))$, $x \in \Gamma_1$, is the pull-back of the normal field of the target contour $\Gamma_2$ to the initial contour $\Gamma_1$. The standard way of solving (18) is to use the gradient descent method: Let $U = U(t,x)$ be the time-dependent displacement field which solves the evolution PDE

$$\frac{\partial U}{\partial t} = \begin{cases} \Delta U - (\Delta U \cdot \tilde{n}_{\Gamma_2})\,\tilde{n}_{\Gamma_2}, & \text{on } \Gamma_1,\\ \Delta U, & \text{otherwise,}\end{cases} \qquad (19)$$

where the initial displacement $U(0,x) = U_0(x)$ is specified by the user (with $x \mapsto x + U_0(x)$ a member of $M$), and $U = 0$ on $\partial\Omega$, the boundary of $\Omega$ (Dirichlet boundary condition). Then $U^*(x) = \lim_{t\to\infty} U(t,x)$ is a solution of the Euler-Lagrange equation (18). Notice that the PDE (19) coincides with the so-called geometry-constrained diffusion introduced in (Andresen and Nielsen, 1999). Thus we have found a variational formulation of the non-rigid registration problem considered there.
Implementation. Following (Andresen and Nielsen, 1999), a time and space discrete algorithm for solving the geometry-constrained diffusion problem can be obtained by iteratively convolving the displacement field with a Gaussian kernel and then projecting the deformed contour $\Gamma_1$ back onto the contour $\Gamma_2$ so that the constraints are satisfied (see Algorithm 2). The algorithm needs an initial registration provided by the user. In our implementation we have translated $\Gamma_1$, projected it onto $\Gamma_2$, and used this as the initial registration. This gives good results in our case, where the deformation and translation are quite small. A Dirichlet boundary condition (zero padding in the discrete implementation) has been used. With pre-registration and embedding of the image into a larger image, the boundary conditions seem to be a minor practical issue. The displacement field is diffused using convolution in each of the x and y coordinates independently with a fixed time parameter.
Algorithm 2: The algorithm for the contour matching.
INPUT: Contours Γ_1 and Γ_2.
OUTPUT: Displacement field D.
1. Initial displacement field: Initial registration of the contours.
2. Diffusion: Convolve the displacement field with a Gaussian kernel.
3. Deformation: Deform Γ_1 by applying the displacement field D.
4. Projection: Project the deformed Γ_1 onto Γ_2 (i.e. find the closest point on the contour Γ_2).
5. Updating the displacement field: Update the displacement field according to the matching points on the contour Γ_2.
6. Convergence: Stop if the displacement field is stable; otherwise go to step 2.
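A compact, point-based sketch of Algorithm 2 follows. It is our own simplification: contours are closed point lists, and the dense Gaussian diffusion of the displacement field is replaced by circular Gaussian smoothing along the contour.

```python
import numpy as np

def match_contours(p1, p2, iters=100, sigma=2.0):
    """Alternate smoothing (diffusion) and closest-point projection so that
    p1 + U lands on Gamma_2. p1: (n,2) and p2: (m,2) contour point arrays."""
    # step 1: initial registration, translating Gamma_1's centroid onto Gamma_2's
    U = (p2.mean(axis=0) - p1.mean(axis=0)) * np.ones_like(p1)
    supp = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-supp**2 / (2 * sigma**2))
    k /= k.sum()
    L = len(supp)
    for _ in range(iters):
        # step 2: diffusion -- circular convolution along the closed contour
        U = np.stack([np.convolve(np.r_[U[-L:, c], U[:, c], U[:L, c]],
                                  k, mode='same')[L:-L] for c in range(2)],
                     axis=1)
        # steps 3-5: deform Gamma_1, project onto Gamma_2, update the field
        q = p1 + U
        d = ((q[:, None, :] - p2[None, :, :])**2).sum(-1)
        U = p2[d.argmin(axis=1)] - p1
    return U
```

For two identical contours that differ by a pure translation, the field converges to that constant translation; with genuine deformation, the smoothing spreads the boundary motion evenly along the contour.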
3.2 Occlusion Detection
The mapping $\Phi = \Phi(x) : \Omega \to \mathbb{R}^2$ which maps $\Gamma_1$ onto $\Gamma_2$ is an estimate of the displacement (motion and deformation) of the boundary of an object between two frames. By finding the displacement of the contour, a consistent displacement of the intensities inside the closed curve $\Gamma_1$ can also be found: $\Phi$ maps $\Gamma_1$ onto $\Gamma_2$, and pixels inside $\Gamma_1$ are mapped inside $\Gamma_2$. This displacement field, which depends only on the displacement (or registration) of the contour and not on the image intensities, can then be used to map the intensities inside $\Gamma_1$ onto $\Gamma_2$. After the mapping, the intensities inside $\Gamma_1$ and $\Gamma_2$ can be compared and classified as having the same or different values. Since we can still find the contour in the occluded area, we can also compute the displacement field even in the occluded area.
Implementation. Occlusions are detected by comparing the predicted and the observed intensities inside the segmented object. Unfortunately the displacement field is not exact: it is an estimate of the contour displacement and simultaneously an interpolation of the displacement for pixels inside $\Gamma_1$. The intensities in the deformed frame must be interpolated. The interpolation can be done either in the deformed (Lagrange) coordinates or in the original (Euler) coordinates. Nearest-neighbor interpolation in the Euler coordinates has been used. Both the deformed and the current frames are filtered using a low-pass filter to decrease differences due to the interpolation and to the displacement.

The deformed frame, $F_p^{Deformed}(x)$, and the current frame, $F_c(x)$, are compared pixel by pixel using some similarity measure. The absolute differences $|F_p^{Deformed}(x) - F_c(x)|$ are used in our experiments. Different similarity measures require different degrees of low-pass filtering. A simple pixel-by-pixel similarity measure requires more filtering, while a patch based similarity measure may require little or no low-pass filtering. See Algorithm 3.
4 EXPERIMENTAL RESULTS
Following Algorithm 1, we apply the proposed model to segment a selected object with approximately uniform intensity frame by frame. The minimization of the functional is obtained by the gradient descent procedure (14), implemented in the level set framework outlined in Sect. 2.1. Since the Chan-Vese segmentation model finds an optimal piecewise-constant approximation to an image, it works best for segmenting objects that have nearly uniform intensity.

The choice of the coupling constant γ is done manually. It is varied to see the influence of the interaction
Algorithm 3: The algorithm for occlusion detection, using the displacement field to predict the contents inside a contour in the next frame.
INPUT: The previous frame F_p, the current frame F_c, and the displacement field D.
OUTPUT: Occlusion mask.
1. Deformation: Deform F_p using the displacement field D into F_p^Deformed.
2. Interpolation: Interpolate F_p^Deformed to get intensities at each grid point.
3. Low-pass filtering: Low-pass filter the images F_p^Deformed and F_c.
4. Similarity measure: Compare F_p^Deformed and F_c inside the contour Γ_2 using a similarity measure, yielding a similarity value for each pixel.
5. Thresholding: Find occlusions by thresholding the similarity measure image.
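Steps 3-5 of Algorithm 3 can be written in a few lines. The helper below is our own sketch: a small box filter stands in for the Gaussian low-pass filter, and the threshold is a free parameter.

```python
import numpy as np

def occlusion_mask(F_def, F_cur, region, thresh=0.3, blur=1):
    """Low-pass filter both frames, compare them pixel by pixel with the
    absolute difference, and threshold inside the segmented region."""
    def boxfilter(F, r):
        out = F.astype(float).copy()
        for _ in range(r):
            p = np.pad(out, 1, mode='edge')
            # average of the centre pixel and its 4-neighbours
            out = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                   + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
        return out
    diff = np.abs(boxfilter(F_def, blur) - boxfilter(F_cur, blur))
    return (diff > thresh) & region
```

Pixels whose predicted and observed intensities disagree strongly inside the object are flagged as occluded.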
term on the segmentation results. The contour is only slightly affected by the prior if γ is small. On the other hand, if γ is too large, the contour will be close to a similarity-transformed version of the prior. Choosing a proper γ is rather problematic in segmentation of image sequences. Using a strong prior can give good results when occlusions occur, but when segmenting frames where occlusions do not occur, the results will be close to the prior.

In Fig. 1, we show the segmentation results for a nonrigid object in a synthetic image sequence, where an occlusion (the gray bar) occurs. Another experiment, on a human walking image sequence where an occlusion (the superposition of another person) occurs, is shown in Fig. 3. In both experiments, the standard Chan-Vese method fails to segment the selected object when it reaches the occlusion (Top Row). The result can be improved by adding a frame-to-frame interaction term as proposed in (13) (Bottom Row). In these experiments, we use a quite large γ to deal with occlusions. As we can see in the last frame of Fig. 3, the result is close to a similarity-transformed version of the prior, although the intensities between the legs differ from the object.
As described in Sect. 3.1 and Sect. 3.2, occlusions can be detected and located. Using the segmentation results for the image sequences, we then apply Algorithms 2 and 3 to detect and locate the occlusions. In Fig. 2 and Fig. 4, we show the occluded regions in Frames 2-5 of Fig. 1 and in Frame 2 of Fig. 3, respectively.
Figure 1: Segmentation of a nonrigid object in a synthetic image sequence with additive Gaussian noise. Top Row: without the interaction term, noise in the occlusion is captured. Bottom Row: with the interaction term, we obtain better results.

Figure 2: Detected occlusions in the synthetic image sequence.

Figure 3: Segmentation of a walking person partly covered by an occlusion in the human walking sequence. Top Row: without the interaction term; Bottom Row: with the interaction term.

Figure 4: Detected occlusion in the human walking sequence.

Figure 5: Segmentation of the synthetic image sequence using a smaller coupling constant than in Fig. 1. Top row: without reconstruction of the occluded regions. Bottom row: after the occluded regions are reconstructed.

Figure 6: Segmentation of the human walking sequence using a smaller coupling constant than in Fig. 3. Top row: without reconstruction of the occluded regions. Bottom row: after the occluded region is reconstructed.

Having information about the location of the occlusions in the image, the occluded regions can be reconstructed in order to further improve the segmentation results. Let $Occ$ denote the occlusion mask, e.g. the output of Algorithm 3. Here we reconstruct the occluded regions by assigning to the intensity values in the occluded regions the mean value of the intensities inside the contour, excluding the occluded regions:

$$I(Occ) = \mu_{int},$$

where

$$\mu_{int} = \mu_{int}(\Gamma) = \frac{1}{|int(\Gamma)\setminus Occ|}\int_{int(\Gamma)\setminus Occ} I(x)\,dx.$$

After reconstructing the occluded regions, we apply Algorithm 1 again with a smaller coupling constant $\gamma$ in order to allow more deformation of the contours. As we can see from Fig. 5 and Fig. 6, the results with reconstruction of the occluded regions are better than those without.
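The reconstruction rule above is a masked mean fill; in numpy it can be sketched as follows (the helper name is our own, and `occ` is a boolean occlusion mask):

```python
import numpy as np

def reconstruct(I, contour_mask, occ):
    """Assign to occluded pixels the mean intensity of the unoccluded
    interior int(Gamma) \\ Occ, as in the text."""
    keep = contour_mask & ~occ
    out = I.astype(float).copy()
    out[occ] = out[keep].mean() if keep.any() else 0.0
    return out
```

After this fill, the occluded patch no longer contradicts the piecewise-constant image model, so a weaker prior (smaller γ) suffices in the second segmentation pass.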
5 CONCLUSIONS
We have presented a method for segmentation and occlusion detection in image sequences containing nonrigid, moving objects. The proposed segmentation method is formulated as a variational problem in the level set framework, with one part of the functional corresponding to the Chan-Vese model and another part corresponding to a pose-invariant interaction with a shape prior based on the previous contour. The optimal transformation as well as the shape deformation are determined by minimizing an energy functional using a gradient descent scheme. The segmentation results can then be used to detect occlusions with the proposed method, which is formulated as a variational contour matching problem. Using the occlusion information, the segmentation can be further improved by reconstructing the occluded regions. Preliminary results are shown and the performance looks promising.
ACKNOWLEDGEMENTS
This research is funded by EU Marie Curie RTN FP6
project VISIONTRAIN (MRTN-CT-2004-005439).
The human walking sequence was downloaded from
EU funded CAVIAR project (IST 2001 37540) web-
site.
REFERENCES
Andresen, P. R. and Nielsen, M. (1999). Non-rigid regis-
tration by geometry-constrained diffusion. In Taylor,
C. and et al, editors, MICCAI’99, LNCS 1679, pages
533–543. Springer Verlag.
Bresson, X., Vandergheynst, P., and Thiran, J.-P. (2006).
A variational model for object segmentation using
boundary information and shape prior driven by the
mumford-shah functional. International Journal of
Computer Vision, 68(2):145–162.
Caselles, V., Kimmel, R., and Sapiro, G. (1997). Geodesic
active contours. International Journal of Computer
Vision, 22(1):61–79.
Chan, T. and Vese, L. (2001). Active contours without edges. IEEE Transactions on Image Processing, 10(2):266–277.
Chan, T. and Zhu, W. (2005). Level set based prior segmen-
tation. In Proceeding CVPR 2005, volume 2, pages
1164–1170.
Chen, Y., Tagare, H. D., Thiruvenkadam, S., Huang, F., Wil-
son, D., Gopinath, K. S., Briggs, R. W., and Geiser,
E. A. (2002). Using prior shapes in geometric ac-
tive contours in a variational framework. International
Journal of Computer Vision, 50(3):315–328.
Cremers, D. and Funka-Lea, G. (2005). Dynamical statisti-
cal shape priors for level set based sequence segmen-
tation. In Paragios, N. and et al., editors, 3rd Workshop
on Variational and Level Set Methods in Computer Vi-
sion, LNCS 3752, pages 210–221. Springer Verlag.
Cremers, D. and Soatto, S. (2003). A pseudo-distance for
shape priors in level set segmentation. In Faugeras,
O. and Paragios, N., editors, 2nd IEEE Workshop
on Variational, Geometric and Level Set Methods in
Computer Vision.
Cremers, D., Sochen, N., and Schnörr, C. (2003). Towards recognition-based variational segmentation using shape priors and dynamic labeling. In Griffin, L. and Lillholm, M., editors, Scale Space 2003, LNCS 2695, pages 388–400. Springer Verlag.
Gentile, C., Camps, O., and Sznaier, M. (2004). Segmen-
tation for robust tracking in the presence of severe
occlusion. IEEE Transactions on Image Processing,
13(2):166–178.
Gustavsson, D., Fundana, K., Overgaard, N. C., Heyden,
A., and Nielsen, M. (2007). Variational segmentation
and contour matching of non-rigid moving object. In
ICCV Workshop on Dynamical Vision 2007, LNCS.
Springer Verlag.
Konrad, J. and Ristivojevic, M. (2003). Video segmentation
and occlusion detection over multiple frames. In Va-
sudev, B., Hsing, T. R., Tescher, A. G., and Ebrahimi,
T., editors, Image and Video Communications and
Processing 2003, SPIE 5022, pages 377–388. SPIE.
Leventon, M., Grimson, W., and Faugeras, O. (2000). Statistical shape influence in geodesic active contours. In Proc. Int'l Conf. Computer Vision and Pattern Recognition, pages 316–323.
Osher, S. and Fedkiw, R. (2003). Level Set Methods and Dy-
namic Implicit Surfaces. Springer-Verlag, New York.
Paragios, N., Rousson, M., and Ramesh, V. (2003). Matching Distance Functions: A Shape-to-Area Variational Approach for Global-to-Local Registration. In Heyden, A. and et al, editors, ECCV 2002, LNCS 2351, pages 775–789. Springer-Verlag Berlin Heidelberg.
Riklin-Raviv, T., Kiryati, N., and Sochen, N. (2007). Prior-
based segmentation and shape registration in the pres-
ence of perspective distortion. International Journal
of Computer Vision, 72(3):309–328.
Rousson, M. and Paragios, N. (2002). Shape priors for level
set representations. In Heyden, A. and et al, editors,
ECCV 2002, LNCS 2351, pages 78–92. Springer Ver-
lag.
Solem, J. E. and Overgaard, N. C. (2005). A geometric for-
mulation of gradient descent for variational problems
with moving surfaces. In Kimmel, R., Sochen, N., and
Weickert, J., editors, Scale-Space 2005, volume 3459
of LNCS, pages 419–430. Springer Verlag.
Strecha, C., Fransens, R., and Gool, L. V. (2004). A prob-
abilistic approach to large displacement optical flow
and occlusion detection. In Statistical Methods in
Video Processing, LNCS 3247, pages 71–82. Springer
Verlag.
Thiruvenkadam, S. R., Chan, T. F., and Hong, B.-W. (2007).
Segmentation under occlusions using selective shape
prior. In Scale Space and Variational Methods in
Computer Vision, volume 4485 of LNCS, pages 191–
202. Springer Verlag.
Tsai, A., Yezzi, A., Wells, W., Tempany, C., Tucker, D., Fan, A., Grimson, W. W., and Willsky, A. (2003). A shape-based approach to the segmentation of medical imagery using level sets. IEEE Transactions on Medical Imaging, 22(2):137–154.