the converse may be preferable (Dukart et al., 2011;
Dukart et al., 2013).
We present a dimensionality reduction approach
based on data clustering to reduce the high-
dimensional feature space of neuroimages. This ap-
proach may be used with several imaging modalities
and/or combined with other feature extraction tech-
niques. In this paper, our approach is applied to the
voxel intensities of FDG-PET images and compared
with a scale-space representation using the Gaussian
pyramid technique in three classification problems:
AD vs CN, MCI vs CN, and AD vs MCI.
The remainder of this paper is organized as follows. Section 2 describes the feature extraction, selection, and classification steps of the proposed methodology. Section 3 presents the data set, the experimental design, and the corresponding results. Finally, Section 4 concludes the paper.
2 METHODOLOGY
Our goal is to diagnose the condition of a given patient from his or her FDG-PET scan by learning from a set of labeled images whose conditions are known. We use the voxel intensities, V(x, y, z), obtained directly from the FDG-PET scan, to identify the condition of a patient. V(x, y, z) denotes the FDG uptake value detected at the voxel located at spatial position (x, y, z), where x, y and z are integers.
Our methodology to build a computer-aided diagnosis system capable of distinguishing different patient conditions has three steps: reduce the number of features to improve system performance; select the most important features; and, finally, train a classification algorithm. Note that in this work we focus on comparing feature extraction methods, namely one using the Gaussian pyramid and another using data clustering. We also compare these dimensionality-reducing techniques with a strategy that uses the whole-brain information. Although we use the voxel intensities of FDG-PET images, other imaging modalities, such as MRI or SPECT, could be used instead.
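As a rough illustration of this three-step pipeline (extraction, selection, classification), the sketch below chains the steps with scikit-learn. The flattening extractor, the univariate SelectKBest selector, and the linear SVM are illustrative assumptions only; they are not the specific choices prescribed by this paper.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

class VoxelFeatureExtractor(BaseEstimator, TransformerMixin):
    """Placeholder for a feature extraction step (e.g., Gaussian pyramid
    or clustering-based reduction); here it simply flattens each volume."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.asarray([volume.ravel() for volume in X])

# Hypothetical three-step pipeline: extract -> select -> classify.
pipeline = Pipeline([
    ("extract", VoxelFeatureExtractor()),
    ("select", SelectKBest(score_func=f_classif, k=500)),
    ("classify", SVC(kernel="linear")),
])
# Usage: pipeline.fit(train_volumes, train_labels); pipeline.predict(test_volumes)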
2.1 Feature Extraction using the
Gaussian Pyramid
A problem when dealing with a 3-dimensional FDG-PET image is the large number of features it contains, which may degrade the performance of pattern recognition algorithms. However, the intensities of voxels that are close in space tend to be similar and, consequently, some redundant information may be eliminated.
The Gaussian pyramid (Burt, 1981) is a technique that creates a sequence of images which are smoothed with a Gaussian average and then scaled down. These images become successively smaller due to subsampling, and each voxel at a given level contains a weighted average of the voxel intensities in the neighborhood of the corresponding voxel on the previous level of the pyramid.
The technique works as follows. In the first step,
the image is smoothed as
V_l(x, y, z) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} \sum_{o=-2}^{2} w(m, n, o) V_{l-1}(2x + m, 2y + n, 2z + o),   (1)

for l = 1, 2, ..., with V_0(x, y, z) = V(x, y, z), where V_l represents the l-th level of the pyramid, and w(m, n, o) = w(m) · w(n) · w(o) is a weighting function or generating kernel. Level 0 corresponds to the original image. The generating kernel used in this work has width 5 and is defined as w(m) = w(n) = w(o) = w_{m+3}, m ∈ {−2, −1, ..., 2}, where w = (1/16) [1 4 6 4 1], which resembles a Gaussian function. In the second step, the image is subsampled by a factor of two in each dimension.
Figure 1 shows an example of applying the Gaus-
sian pyramid to a 128 × 128 × 60 FDG-PET image (a
slice for each image is shown).
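To make the construction above concrete, the following sketch builds the pyramid for a 3-dimensional volume with NumPy and SciPy. The separable 5-tap kernel matches Eq. (1); the "nearest" boundary handling is an assumption, since the paper does not specify how image borders are treated, and this is a minimal sketch rather than the authors' implementation.

import numpy as np
from scipy.ndimage import correlate1d

# Generating kernel w = (1/16) [1 4 6 4 1]
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def next_level(volume):
    """Smooth with the separable kernel, then subsample by 2 (Eq. 1)."""
    smoothed = volume.astype(float)
    for axis in range(3):  # w(m, n, o) = w(m) * w(n) * w(o), applied axis by axis
        smoothed = correlate1d(smoothed, KERNEL, axis=axis, mode="nearest")
    return smoothed[::2, ::2, ::2]  # keep every second voxel in each dimension

def gaussian_pyramid(volume, levels):
    """Return [V_0, V_1, ..., V_levels], where V_0 is the original image."""
    pyramid = [volume]
    for _ in range(levels):
        pyramid.append(next_level(pyramid[-1]))
    return pyramid

Applied to a 128 × 128 × 60 volume, successive levels have sizes 64 × 64 × 30, 32 × 32 × 15, and so on, consistent with Figure 1.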
2.2 Feature Extraction using Data
Clustering
We propose to perform feature extraction using a data clustering algorithm. The objective of data clustering is to divide a data set X, composed of n data objects {x_1, ..., x_n}, into K clusters {C_1, ..., C_K} such that similar objects x_i, x_j are placed in the same cluster, i.e. {x_i, x_j} ⊂ C_k, and dissimilar objects are grouped in different clusters, i.e. x_i ∈ C_k, x_j ∈ C_l, k ≠ l. The resulting labels of a partition P = {P_1, ..., P_n}
indicate the cluster to which each object belongs. We
intend to group voxels in a FDG-PET image into
clusters to reduce redundant information and, conse-
quently, decrease the number of features for the clas-
sification task. The clusters should represent regions
in the 3-dimensional space with similar voxel inten-
sities. The methodology to find these regions is ex-
plained in the following.
Let V_p(x, y, z) represent the voxel (x, y, z) of the p-th FDG-PET image in a database containing q images. First, a mean brain image V^* is computed by averaging the corresponding voxel (x, y, z) over the entire population:

V^*(x, y, z) = \frac{1}{q} \sum_{i=1}^{q} V_i(x, y, z).   (2)
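As a minimal sketch of Eq. (2) and of the voxel-grouping idea, the code below averages co-registered volumes voxel-wise and then groups voxels with k-means on their spatial coordinates and mean intensity. The k-means choice and the spatial_weight parameter (balancing spatial proximity against intensity similarity) are hypothetical illustrations, not details taken from this paper.

import numpy as np
from sklearn.cluster import KMeans

def mean_brain_image(volumes):
    """Voxel-wise average over the q volumes in the database (Eq. 2)."""
    return np.mean(np.stack(volumes, axis=0), axis=0)

def cluster_voxels(mean_volume, n_clusters, spatial_weight=1.0):
    """Group voxels into regions of the 3-D space with similar intensity.
    Hypothetical realization: k-means on (x, y, z, intensity) features."""
    coords = np.indices(mean_volume.shape).reshape(3, -1).T  # all (x, y, z) positions
    intensities = mean_volume.reshape(-1, 1)
    features = np.hstack([spatial_weight * coords, intensities]).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(mean_volume.shape)  # one cluster label per voxel

One natural use of the resulting regions, not detailed in this excerpt, would be to summarize each subject's image by one statistic (e.g., the mean intensity) per cluster, yielding K features per image.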