AN AUTOMATED APPROACH FOR PREPROCESSING
RETINOGRAPHIES
Silvia Alayón, José Luis Sánchez, José Sigut, Jorge Marrero
Department of Ingeniería de Sistemas y Automática y Arquitectura y Tecnología de Computadores
University of La Laguna, Spain
Manuel González
Department of Cirugía, Oftalmología y Otorrinolaringología, University of La Laguna, Spain
Keywords: Retinography, Glaucoma, Image preprocessing, Color spaces, Illumination correction.
Abstract: A retinography is a retinal photograph useful for the precise tracking of any retinal pathology, especially
glaucoma. Although there are sophisticated procedures for studying the evolution of the optic nerve,
rigorous tracking is sometimes not feasible due to the high number of patients, the
high cost of the procedure and the need for highly qualified staff. The design of an automated method for
detecting this pathology in its early stages through the automated analysis of retinographies could reduce the
cost of the process and the number of required specialists. Inspired by this objective, an automated
preprocessing method for retinographies is presented in this paper. The proposed methodology combines
information from different color spaces to achieve illumination and contrast enhancement.
1 INTRODUCTION
A retinography is a retinal photograph useful for
the precise tracking of any retinal pathology,
especially glaucoma. Glaucoma refers to a group
of diseases that affect the optic nerve and involve a
loss of retinal ganglion cells in a characteristic
pattern. Without treatment, glaucoma leads to
irreversible damage of the optic nerve and to a loss
of the visual field that can cause partial or total
blindness. Therefore, one of the regions of interest in
the study of retinographies is the optic nerve head
(ONH).
Glaucoma is the most frequent cause of
blindness in industrialized countries. The best way
to delay its evolution and avoid vision loss is the
regular monitoring of the medical treatment. The
design of an automated method for detecting this
pathology in its early stages through the automated
analysis of retinographies could reduce the cost of
the process and the number of required specialists.
The application of image processing techniques
to retinography analysis is a growing research field.
These techniques are usually not fully automated
and normally require user intervention. The
computer techniques for processing eye fundus
images usually involve the manual or semiautomatic
drawing of the papilla contour and other structures of
the ONH. However, the objective of minimizing
human intervention in these systems is emerging in
related research works (Cox et al., 1991), (Iqbal et
al., 2006), (Teng et al., 2002).
The work presented in this paper is part of a
research project developed in collaboration with the
Hospital Universitario de Canarias. The main
objective of the project is the design of an automated
software system for the delimitation of the ONH and
for the differentiation of superimposed structures
(arteries, vessels). Due to the irregularities present
in the analyzed images, it is essential to carry out a
preprocessing stage in order to improve the image
before segmenting the different regions of interest.
This enhancement process is the core of this paper.
The problem of preprocessing retinographies has
been tackled by several researchers (Himaga et al.,
2002), (Fang et al., 2003), (Echevarría et al., 2004),
(Chaudhuri et al., 1989). The preprocessing method
proposed in this paper aims to improve the
illumination and contrast characteristics of the
images by using information from different color spaces.
In the following sections the proposed method is
explained, the experimental results are shown and
our main conclusions are offered.
2 DESCRIPTION OF THE
MEDICAL PROBLEM
The database used in this research work consists of
28 high resolution images. These images were
acquired in the Hospital Universitario de Canarias
with a Canon D5 camera.
A retinography from our database is shown in
figure 1, with its different structures and parts
marked.
Figure 1: Parts of a retinography: (a) Papilla or optic
Nerve Head (ONH), (b) Empty zone, (c) Blood vessels,
(d) Retina, (e) Macula.
The excavation of the papilla of the optic nerve is the
lightest zone inside the ONH. There are no nerve
fibers in this zone. A large excavation of the papilla
involves a loss of nerve fibers and is a sign of a
glaucomatous papilla. Therefore, the automatic
detection of this region can be helpful for the
medical diagnosis process.
3 DESCRIPTION OF THE
PROPOSED PREPROCESSING
METHOD
3.1 Step 1: Scaling of the RGB
Histogram
A narrow histogram produces low-contrast images,
independently of the image luminosity. A simple
way to enhance the contrast is to increase the
histogram's dynamic range. Scaling the histogram
preserves its shape and does not increase the
background noise.
The RGB histograms of the retinographies
(see figure 2, red) present a narrow distribution for
the G and B components (with a low average) and a
wider distribution for the R component.
The bimodality of the histograms is clearly visible.
The lower part represents the empty zone and the
upper part corresponds to the ocular globe.
Therefore, the distribution corresponding to the
empty zone must be eliminated before expanding
the histogram on the left side.
Once the bimodality correction has been applied to
the original histogram, a new histogram is obtained;
we denote it as a. The scaling process is carried out
as shown in equation 1, where min is the value n2,
max is the 99th percentile, $(2^B - 1)$ is the
number of levels of the desired range (255 in our
case) and b is the resultant histogram after the
scaling procedure. Figure 2 shows the result of
scaling the RGB histograms. It can be observed
how each distribution is now wider.
$b[m,n] = (2^B - 1)\,\dfrac{a[m,n] - \min}{\max - \min}$   (1)
Figure 2: Original histogram (red) and scaled histogram
(blue) of R, G and B components.
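As a rough illustration, the following Python sketch implements the per-channel scaling of equation 1 with NumPy. It assumes the empty-zone mode has already been removed from the data; the default choice of the channel minimum is only a placeholder for the value n2, which the paper does not detail.

import numpy as np

def scale_channel(a, lo=None, hi=None, bits=8):
    """Linear scaling of one channel to the full dynamic range (equation 1)."""
    a = a.astype(np.float64)
    if lo is None:
        lo = a.min()               # placeholder for the value n2 used in the paper
    if hi is None:
        hi = np.percentile(a, 99)  # percentile 99, as indicated in the text
    b = (2 ** bits - 1) * (a - lo) / (hi - lo)
    return np.clip(b, 0, 2 ** bits - 1).astype(np.uint8)

def scale_rgb(img):
    """Apply the scaling independently to the R, G and B components."""
    return np.dstack([scale_channel(img[..., c]) for c in range(3)])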
3.2 Step 2: Luminance Equalization in
YIQ Space
The aim of this step is the equalization of the image
luminance (to enhance the contrast and brightness of
the image) without loss of color information. As a
result of this operation the blood vessels will be
better defined in the three RGB components.
In the YIQ color space all the information about
the image luminance is concentrated in the Y
component. Therefore, an initial conversion from
RGB to YIQ is required and the Y component must
be extracted. The luminance equalization is carried
out through an adaptive equalization procedure.
Finally, the image is recomposed in the YIQ space
and converted back to the RGB color space.
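A minimal Python sketch of this step is given below. It assumes the unspecified adaptive equalization can be approximated by CLAHE (scikit-image's equalize_adapthist); the exact procedure used by the authors may differ.

import numpy as np
from skimage import color, exposure

def equalize_luminance(rgb_uint8):
    """Equalize only the Y (luminance) channel in YIQ space, preserving color (I, Q)."""
    rgb = rgb_uint8.astype(np.float64) / 255.0
    yiq = color.rgb2yiq(rgb)                                # RGB -> YIQ
    yiq[..., 0] = exposure.equalize_adapthist(yiq[..., 0])  # adaptive equalization of Y only
    out = color.yiq2rgb(yiq)                                # recompose and convert back to RGB
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)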
3.3 Step 3: Illumination Correction
The studied retinographies present illumination
inhomogeneities. The illumination grows slowly
from the borders to the center, producing dark areas
near the borders.
For the problem tackled in this paper, no images
have been acquired for calibration purposes.
Therefore, the illumination correction must be done
through an a posteriori estimation. According to
(Young et al., 1998), if I(x,y) is an image with
illumination inhomogeneities, LP is a low pass filter
and c a constant, the image after the illumination
correction, $I_c(x,y)$, will be the result of applying
equation 2.

$I_c(x,y) = I(x,y) - LP\{I(x,y)\} + c$   (2)
This formula can be implemented with separable
kernel filters (one for rows and the other for
columns). The low pass filter is calculated as the
mean of every row or column, and the constant to be
added is the total average of the image (the scalar M),
as shown in equation 3.
$M = \mathrm{mean}(I(x,y)), \qquad I_R = I - C_R + M, \qquad I_C = I_R - C_C + M$   (3)
where M represents the scalar average of a
component, $C_R$ is an image in which every column
is substituted by the mean of that column and $C_C$ an
image in which every row is substituted by its mean.
Although the order is not important, we have first
applied the correction by columns and then the
correction by rows for each RGB component.
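The following sketch shows one way of implementing equation 3 on a single component with NumPy. Whether $C_C$ is computed from the original image or from the already column-corrected image is not explicit in the text; the second option is assumed here.

import numpy as np

def correct_illumination(channel):
    """Separable illumination correction of one component (equation 3)."""
    I = channel.astype(np.float64)
    M = I.mean()                           # scalar average of the component
    C_R = I.mean(axis=0, keepdims=True)    # every column replaced by its mean
    I_R = I - C_R + M                      # correction by columns
    C_C = I_R.mean(axis=1, keepdims=True)  # every row replaced by its mean (assumed: from I_R)
    I_C = I_R - C_C + M                    # correction by rows
    return np.clip(I_C, 0, 255).astype(np.uint8)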
The application of equation 3 to each image
component must be modified because some results
showed illumination overcorrection. There are
markedly brighter zones in the retinographies (the
ONH zone, for example). For these zones the row and
column averages will be considerably higher than for
the rest of the image. The final illumination
correction method therefore applies a double process
to the image, as shown in figure 3:
- Sub-process A: apply equation 3 to the image
previously prepared as explained in the last
subsection, to avoid overcorrection. A global
illumination correction is obtained without
considering the influence of the highly illuminated
zones.
- Sub-process B: apply equation 3 to the image
without previous preparation. Therefore, an image
with illumination overcorrection is obtained.
The last operation is the combination of the
images obtained in both sub-processes: pixels of the
highly illuminated zones are extracted from the image
of sub-process B; the remaining pixels are extracted
from the image of sub-process A.
Figure 3: Complete illumination correction procedure
(step 3).
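The double process of figure 3 could be sketched as follows, reusing correct_illumination from the previous sketch. The paper does not specify how the highly illuminated zones are located or how the image is "prepared" for sub-process A; the intensity threshold and the replacement of bright pixels by the mean of the remaining ones used below are placeholder assumptions.

import numpy as np

def double_correction(channel, bright_thresh=200):
    """Combine a correction that ignores bright zones (A) with a plain one (B)."""
    bright = channel > bright_thresh                 # placeholder localization of bright zones (e.g. ONH)
    prepared = channel.copy()
    prepared[bright] = int(channel[~bright].mean())  # placeholder "preparation" limiting bright-zone influence
    corrected_a = correct_illumination(prepared)     # sub-process A: correction without bright-zone bias
    corrected_b = correct_illumination(channel)      # sub-process B: correction with overcorrection in bright zones
    out = corrected_a.copy()
    out[bright] = corrected_b[bright]                # compose: bright pixels from B, the rest from A
    return out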
4 EXPERIMENTAL RESULTS
The experimental results obtained with the proposed
preprocessing method for retinographies are
presented in figure 4, which shows the results
corresponding to 8 retinographies from the database
of the Hospital Universitario de Canarias.
5 CONCLUSIONS
The application of image processing techniques to
retinography analysis is a growing research
field. Different techniques have been studied and
developed, but these techniques are usually not fully automated
and normally require user intervention. The
work presented in this paper is part of a research
project developed in collaboration with the Hospital
Universitario de Canarias. The main objective of the
project is the design of an automated software
system for the delimitation of the ONH and for the
differentiation of superimposed structures (arteries,
vessels, and so on).
Figure 4: Experimental results. From left to right: original
image, image after step 1, image after step 2, image after
step 3.
Due to the difficulties presented by the
retinographies, it is essential to carry out a
preprocessing stage in order to improve the image
before segmenting the different regions of interest.
This enhancement process has been the core of this
paper.
The proposed preprocessing method comprises
three steps: 1) scaling of the RGB histogram
(improvement of the image information), 2)
luminance equalization in the YIQ color space
(contrast enhancement without loss of color
information), and 3) illumination correction. As
shown in the experimental results, the preprocessing
method achieves an important enhancement of the
main structures contained in the image. This will
considerably ease the design of efficient
segmentation algorithms.
Our future research work will be oriented towards
the design of automated and efficient segmentation
methods to be applied to the preprocessed images
obtained in this work.
ACKNOWLEDGEMENTS
This research work has been partially funded by
the project ULLAPD-08/01 of the Agencia Canaria
de Investigación, Innovación y Sociedad de la
Información.
REFERENCES
Chaudhuri, S., Chatterjee, S., Katz, N., Nelson, M.,
Goldbaum, M., 1989. Detection of blood vessels in
retinal images using two-dimensional matched filters.
IEEE Trans. Med. Imag; vol. 8(3), pp.263-269.
Cox, J., Wood, I., 1991. Computer-assisted optic nerve
head assessment. Ophthal. Physiol. Opt.; 11, pp.27-35.
Echevarria, P., Miller, T., O'Meara, J., 2004. Blood Vessel
Segmentation in Retinal Images. Project P14: Blood
Vessel Segmentation in Retinal Images.
http://robots.stanford.edu/cs223b04/project14.html.
Fang, B., Hsu, W., Lee, M.L., 2003. On the Detection of
Retinal Vessels in Fundus Images. Singapore-MIT
Alliance National University of Singapore. Computer
Science (CS).
Himaga, M., Usher, D., Boyce, J.F., 2002. Retinal Blood
Vessel Extraction by using Multi-resolution Matched
Filtering and Directional Region Growing
Segmentation. IAPR Workshop on Machine Vision
Applications.
Iqbal, M.I., Aibinu, A.M., Gubbal, N.S., Khan, A., 2006.
Automatic Diagnosis of Diabetic Retinopathy using
Fundus Images. University essay from Blekinge
Tekniska Högskola/Sektionen för Teknik (TEK).
Thesis.
Teng, T., Lefley, M., Claremont, D., 2002. Progress
towards automated diabetic ocular screening: a review
of image analysis and intelligent systems for diabetic
retinopathy. Med. Biol. Eng. Comput.; 40(1), pp. 2-13.
Young, I.Y., Gerbrands, J.J., Van Vliet, L.J., 1998. Image
Processing Fundamentals, in The Digital Signal
Processing Handbook, V. K. Madisetti and D. B.
Williams, Eds. Boca Raton, Florida: CRC Press in
cooperation with IEEE Press.