sometimes parts of the gray matter and non-brain tissues are merged into one segment due to narrow gray-value bridges. Therefore, some postprocessing steps are needed: first, the CSC segments are preliminarily classified as brain or non-brain by their mean gray value (see 3.4.1). Since only the mean gray value is considered, segments containing non-brain tissues with an intensity similar to gray or white matter are always classified as brain. This problem is solved by morphological operations (see 3.4.2). Finally, the brain is separated into gray and white matter.
3.4.1 Preliminary Classification
We obtain a preliminary brain mask by classifying the CSC segments: using the intensity thresholds (see section 3.2), we select all segments that could belong to the brain. That is, if the mean intensity of a segment lies in the range [t2, t4], the segment is kept in the preliminary brain mask; otherwise it is discarded. However, even if optimal thresholds are used, connections between the brain and surrounding non-brain tissues still occur. In order to break these connections and reduce misclassification, morphological operations are applied.
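The preliminary classification can be sketched as follows. The label-volume representation of the CSC segments and the function name are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def preliminary_brain_mask(image, labels, t2, t4):
    """Keep every CSC segment whose mean gray value lies in [t2, t4].

    `labels` is assumed to assign each voxel the id of its CSC
    segment (0 = background); t2 and t4 are the intensity
    thresholds from section 3.2.
    """
    mask = np.zeros(image.shape, dtype=bool)
    for seg_id in np.unique(labels):
        if seg_id == 0:
            continue
        segment = labels == seg_id
        if t2 <= image[segment].mean() <= t4:
            mask |= segment  # segment could belong to the brain
    return mask
```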
3.4.2 Morphological Operations and Final
Classification
We apply the following morphological operations to break up bridges between brain and non-brain tissues:
1. Select the largest connected component (LCC1) in the preliminary brain mask and perform an erosion with a ball structuring element with a radius of 3-5 voxels (depending on the size of the input image). This breaks connections between the brain and non-brain tissues.
2. Select the largest connected component (LCC2) after the erosion and perform a dilation with a structuring element of the same size to obtain LCC3. This reconstructs the eroded brain segment.
3. Compute the geodesic distances to LCC3 from all points that belong to LCC1 but not to LCC3, using a ball structuring element with a radius of 1 voxel. Then assign all points whose distances are <= 4 voxels to LCC3 as the final segmented brain. In this step, some finer structures of the segmented brain are recovered.
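The three steps above can be sketched with SciPy. The `ball` helper, the default radius, and the approximation of "geodesic distance <= 4 voxels" by four conditional dilations are illustrative choices, not the paper's code:

```python
import numpy as np
from scipy import ndimage

def ball(r):
    """Ball structuring element of radius r voxels."""
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x * x + y * y + z * z <= r * r

def largest_cc(mask):
    """Largest connected component of a boolean volume."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

def refine_brain_mask(prelim_mask, radius=4, max_dist=4):
    lcc1 = largest_cc(prelim_mask)
    # Step 1: erosion breaks thin bridges to non-brain tissue.
    eroded = ndimage.binary_erosion(lcc1, structure=ball(radius))
    lcc2 = largest_cc(eroded)
    # Step 2: dilation with the same element reconstructs the
    # eroded brain segment (LCC3).
    lcc3 = ndimage.binary_dilation(lcc2, structure=ball(radius))
    # Step 3: recover points of LCC1 within geodesic distance
    # max_dist of LCC3, approximated by max_dist conditional
    # dilations with a 1-voxel ball restricted to LCC1.
    brain = lcc3
    for _ in range(max_dist):
        brain = ndimage.binary_dilation(brain, structure=ball(1)) & lcc1
    return brain
```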
Finally, we remove all voxels not belonging to the brain mask from the CSC segments. The threshold t3 is then applied to classify the remaining segments into gray matter (GM) and white matter (WM).
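This final step can be sketched as below; as before, the label-volume representation of the segments is an assumed interface:

```python
import numpy as np

def classify_gm_wm(image, labels, brain_mask, t3):
    """Split the brain segments at threshold t3: segments whose
    mean gray value exceeds t3 become white matter, the rest
    gray matter (sketch, not the paper's implementation)."""
    labels = np.where(brain_mask, labels, 0)  # drop non-brain voxels
    gm = np.zeros(image.shape, dtype=bool)
    wm = np.zeros(image.shape, dtype=bool)
    for seg_id in np.unique(labels):
        if seg_id == 0:
            continue
        segment = labels == seg_id
        if image[segment].mean() > t3:
            wm |= segment
        else:
            gm |= segment
    return gm, wm
```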
4 EXPERIMENTS AND RESULTS
To assess the performance of the proposed method,
we applied it to 18 T1-weighted MR brain images
(10 simulated images and 8 real images). The
simulated images were downloaded from the
Brainweb site (http://www.bic.mni.mcgill.ca/
brainweb). These images consist of 181x217x181
voxels sized 1x1x1mm with a gray value depth of 8
bits. Noise levels of 1%, 3%, 5%, 7%, and 9% were added, and the intensity inhomogeneity (“RF”) levels are 20% and 40%. The real images were acquired at 1.5 Tesla with a SIEMENS AVANTO scanner at the BWZK hospital in Koblenz,
Germany. They consist of 384x512x192 voxels
with 12 bits gray value depth. The voxels are sized
0.45x0.45x0.9mm.
All processes were performed on an Intel P4
3GHz-based system. The execution time of the
complete algorithm is about 24 seconds for a
181x217x181 image.
Some parameters need to be set for the bias field correction: the factor k in equation (3), which controls the speed of the iterative correction, was set to 0.05. The standard deviation of the Gaussian filter determines the smoothness of the correction. For the simulated images it was set to 30 in each dimension; for the real images it was set to 60x60x30 due to the anisotropic voxel resolution. The termination threshold E of the bias field correction was set to 0.001, which automatically determines the number of iterations according to the degree of inhomogeneity and ensures the accuracy of the correction. Figure 5
shows a correction example of a simulated image.
The intensities of voxels belonging to the same
tissue become relatively homogeneous in the
corrected image (see Figure 5(b)). The misclassified
part of the white matter (see Figure 5(e)) in the
segmentation without bias field correction is
recovered in the segmentation with bias field
correction (see Figure 5(f)).
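Equation (3) is not reproduced in this excerpt, so the following is only a generic homomorphic sketch using the stated parameters (step factor k, Gaussian standard deviation, termination threshold E), not the paper's actual update rule:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias(image, k=0.05, sigma=(30, 30, 30), E=0.001,
                 max_iter=200):
    """Illustrative iterative multiplicative bias correction (NOT
    the paper's equation (3)): each iteration removes a fraction k
    of the low-frequency log-intensity variation, estimated by a
    Gaussian filter with the given standard deviation, and stops
    once the mean update falls below E."""
    log_img = np.log1p(image.astype(float))
    for _ in range(max_iter):
        low_freq = gaussian_filter(log_img, sigma)
        update = k * (low_freq - low_freq.mean())
        log_img -= update
        if np.abs(update).mean() < E:
            break
    return np.expm1(log_img)
```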
The Brainweb site provides the “ground truth”
for the simulated images that enables us to evaluate
the proposed method quantitatively. We use the
following evaluation measures:
- Coverability Rate (CR): the number of voxels in the segmented object (S) that belong to the same object (O) in the “ground truth”, divided by the number of voxels in O.
- Error Rate (ER): the number of voxels in S that do not belong to O, divided by the number of voxels in S.
- Similarity Index (SI) (Stokking 2000): two times the number of voxels in the segmented object (S)
VISAPP 2006 - IMAGE ANALYSIS
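Assuming the similarity index takes its standard form, 2|S ∩ O| / (|S| + |O|), the three measures can be computed as:

```python
import numpy as np

def coverability_rate(S, O):
    """CR: voxels of S inside O, divided by |O|."""
    return (S & O).sum() / O.sum()

def error_rate(S, O):
    """ER: voxels of S outside O, divided by |S|."""
    return (S & ~O).sum() / S.sum()

def similarity_index(S, O):
    """SI: 2 |S intersect O| / (|S| + |O|), assuming the standard
    Dice-type definition (Stokking 2000)."""
    return 2 * (S & O).sum() / (S.sum() + O.sum())
```

S and O are boolean volumes marking the segmented object and the ground-truth object, respectively.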