Figure 2: (a) Part of a retinal image. (b) Result of morphological contrast and median filter. (c) Result of matched filters,
thresholding and cleaning. (d) Thinned and pruned binary image. (e) Detected landmarks (white dots).
2 FEATURE-BASED TECHNIQUE
A feature-based registration method consists of three main steps: landmark extraction, landmark matching and image transformation.
2.1 Landmark Extraction
Traditionally, bifurcations are extracted automatically by retinal vessel segmentation, followed by thinning and branch-point analysis. For example, Zana and Klein (Zana and Klein, 1999) enhanced vessels with a sum of top-hats with linear rotating structuring elements and detected bifurcations using a supremum of openings with rotating T-shaped structuring elements. In (Becker et al., 1998), the boundaries of retinal vessels are detected using a standard Sobel filter and the vasculature is thickened using a minimum filter.
Proposed Method. Retinal vessels can be approximated by a succession of linear segments (of length L) at different orientations. All parameter values given below were determined experimentally. First, retinal vessels are emphasized using a minimum of morphological contrasts with linear (L = 9 pixels), rotating (30° increments) structuring elements; a median filter then smooths the resulting image (Figure 2.b).
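The exact contrast operator is not spelled out above, so the following is a minimal NumPy/SciPy sketch under explicit assumptions: the per-orientation morphological contrast is taken to be the classical top-hat form f + (f − opening) − (closing − f), and the 3 × 3 median window is ours as well. Only the orientation sweep, the pixel-wise minimum and the smoothing follow the text.

import numpy as np
from scipy.ndimage import grey_opening, grey_closing, median_filter

def linear_se(L=9, theta=0.0):
    # Boolean linear structuring element of length L at angle theta (radians).
    half = L // 2
    se = np.zeros((L, L), dtype=bool)
    for t in np.linspace(-half, half, 4 * L):
        y = int(round(half + t * np.sin(theta)))
        x = int(round(half + t * np.cos(theta)))
        se[y, x] = True
    return se

def enhance_vessels(img, L=9, step_deg=30):
    # Minimum over orientations of an assumed morphological contrast
    # f + (f - opening) - (closing - f), then a median filter.
    img = img.astype(float)
    responses = []
    for deg in range(0, 180, step_deg):
        se = linear_se(L, np.deg2rad(deg))
        o = grey_opening(img, footprint=se)
        c = grey_closing(img, footprint=se)
        responses.append(img + (img - o) - (c - img))
    return median_filter(np.min(responses, axis=0), size=3)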
Second, retinal vessels, whose cross-section can be approximated by a Gaussian-shaped curve (standard deviation σ), are detected by matched filters with 6 orientations, L = 9 pixels and σ = 2 (Chaudhuri et al., 1989). Next, the thresholded image is cleaned: small objects (≤ 200 pixels) and small holes (≤ 15 pixels) are deleted (Figure 2.c).
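As a sketch of the matched filtering step, the kernels below follow the common formulation of (Chaudhuri et al., 1989): a zero-mean negated Gaussian profile across the vessel, constant along it, evaluated at each orientation, keeping the pixel-wise maximum response. The kernel support size and the normalisation over the support are our assumptions; thresholding and cleaning are omitted.

import numpy as np
from scipy.ndimage import correlate

def matched_filter_bank(L=9, sigma=2.0, n_orient=6):
    # Zero-mean Gaussian-profile kernels at n_orient orientations.
    half = L // 2 + int(3 * sigma)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_orient):
        t = k * np.pi / n_orient
        u = xs * np.cos(t) + ys * np.sin(t)    # across the vessel
        v = -xs * np.sin(t) + ys * np.cos(t)   # along the vessel
        ker = np.where(np.abs(v) <= L / 2,
                       -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)
        support = np.abs(v) <= L / 2
        ker[support] -= ker[support].mean()    # zero mean over the support
        kernels.append(ker)
    return kernels

def matched_filter_response(img, **kw):
    # Pixel-wise maximum response over all orientations.
    return np.max([correlate(img.astype(float), k)
                   for k in matched_filter_bank(**kw)], axis=0)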
Third, the centreline of the vascular tree is obtained with a thinning operation and is pruned so as to eliminate small branches (≤ 15 pixels) (Figure 2.d). Fourth, bifurcations are extracted as skeleton pixels with at least six binary transitions between adjacent pixels of their V8 or V16 neighbourhoods; a sketch of this test is given below. Finally, landmarks within 10 pixels of each other are merged: the new landmark corresponds to the centre of mass of the group with equal weights and may not belong to the skeleton (Figure 2.e).
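A minimal NumPy sketch of the V8 crossing-number test, assuming a binary skeleton: a skeleton pixel whose 8-neighbourhood, read circularly, shows at least six 0/1 transitions has at least three incident branches. The V16 variant (a radius-2 ring of 16 pixels) is analogous; landmark merging is omitted.

import numpy as np

def bifurcation_candidates(skel):
    # Mark skeleton pixels whose circular 8-neighbourhood (V8) shows
    # at least six binary transitions, i.e. at least three branches.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # circular order
    skel = skel.astype(np.uint8)
    H, W = skel.shape
    out = np.zeros((H, W), dtype=bool)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if not skel[y, x]:
                continue
            ring = [skel[y + dy, x + dx] for dy, dx in offs]
            transitions = sum(ring[i] != ring[(i + 1) % 8] for i in range(8))
            out[y, x] = transitions >= 6
    return out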
2.2 Landmark Matching
After extraction, pairs of matching landmarks must be determined between the two images. (Can et al., 2002) and (Zana and Klein, 1999) suggested similarity measures between bifurcations based on the angles of the surrounding vessels. Due to nonuniform illumination, such a similarity measure may not be robust. (Becker et al., 1998) and (Ryan et al., 2004) computed simple transformation parameters from all possible combinations of landmarks. In this data set, matched landmark pairs form a tight cluster which is unfortunately difficult to demarcate.
Proposed Method. Let I_p and I_q denote the two images, arbitrarily called the reference and the transformed image respectively, with extracted landmark sets P and Q respectively. (u, v) and (u′, v′) are the coordinate systems of I_p and I_q respectively. In this paper, the matching technique proceeds in two steps.
The first step is a similarity measure between landmark signatures of both images and results in an initial set S of candidate couples. For a landmark p, the signature is the number of surrounding vessels n_p and the angles between them θ^p_1, ..., θ^p_{n_p}, obtained by computing the intersection between the pruned skeleton and a circle of fixed diameter o = 24 pixels centred on the landmark. For each (p, q) belonging to P × Q, (p, q) belongs to S if and only if n = n_p = n_q ≤ 5 and θ^q_i − α ≤ θ^p_i ≤ θ^q_i + α for i = 1, ..., n, with α = 10°. This step restricts the landmark sets P and Q before the second step, which is more time-consuming. A sketch of the signature test follows.
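A minimal sketch of the signature comparison, with two assumptions made explicit: the signature extraction (intersecting the pruned skeleton with the 24-pixel circle) is assumed done upstream, and the angle lists are compared after sorting, whereas the text compares them index-wise, which may in practice require a circular alignment of the lists.

def signatures_match(sig_p, sig_q, alpha=10.0, max_branches=5):
    # Same number of surrounding vessels (n <= 5) and branch angles
    # equal within alpha degrees; a signature is a list of angles.
    if len(sig_p) != len(sig_q) or len(sig_p) > max_branches:
        return False
    return all(abs(a - b) <= alpha
               for a, b in zip(sorted(sig_p), sorted(sig_q)))

def initial_couples(sigs_P, sigs_Q, alpha=10.0):
    # All (p, q) index pairs whose signatures agree: the set S.
    return [(i, j) for i, s_p in enumerate(sigs_P)
                   for j, s_q in enumerate(sigs_Q)
                   if signatures_match(s_p, s_q, alpha)]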
The second step consists in estimating, for each initial couple, the spatial arrangement of the landmarks between the two images (Figure 3). For an initial couple (p_{S_j}, q_{S_j}) belonging to S, whose locations (u_{S_j}, v_{S_j}) and (u′_{S_j}, v′_{S_j}) now constitute the image origins, landmarks from the two images that have the same locations up to a given tolerance value δ are preserved:

C_j = {(p, q) ∈ P × Q | q = (u′, v′) ∈ [u′_{S_j} + Δu − δ ; u′_{S_j} + Δu + δ] × [v′_{S_j} + Δv − δ ; v′_{S_j} + Δv + δ]},   (1)
with Δu = u − u_{S_j}, Δv = v − v_{S_j}, p = (u, v) and δ = 8 pixels. The final matching set C corresponds to the C_j containing the largest number of couples.
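A minimal NumPy sketch of this second step, under the completion assumed just above (the final set C is the largest C_j). P and Q are arrays of (u, v) landmark coordinates and S holds index pairs, as produced by the sketch of the first step; the component-wise tolerance test is Eq. (1).

import numpy as np

def spatial_agreement(P, Q, S, delta=8.0):
    # For each initial couple, re-express both landmark sets relative to
    # it and keep the pairs whose offsets agree within +/- delta pixels.
    best = []
    for (jp, jq) in S:
        off_p = P - P[jp]            # offsets in the reference image
        off_q = Q - Q[jq]            # offsets in the transformed image
        C_j = [(a, b)
               for a in range(len(P))
               for b in range(len(Q))
               if np.all(np.abs(off_q[b] - off_p[a]) <= delta)]
        if len(C_j) > len(best):
            best = C_j               # keep the largest agreement set C
    return best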