3 EXPERIMENTAL RESULTS
Our experiments were conducted on ECG data
from the MIT-BIH arrhythmia database (Mark and
Moody, 1997). In particular, the considered beats
belong to the following classes:
normal sinus rhythm (N), atrial premature beat (A),
ventricular premature beat (V), right bundle branch
block (RB), left bundle branch block (LB), and
paced beat (/). As in (Inan et al., 2006), the
beats were selected from the recordings of 18
patients, which correspond to the following files:
100, 102, 104, 105, 106, 107, 118, 119, 200, 201,
203, 205, 208, 212, 213, 214, 215, and 217. To feed
the classification process, we adopted in this study
the following two kinds of features: i) ECG
morphology features; and ii) three ECG temporal
features, namely the QRS complex duration, the RR
interval (i.e., the time span between two consecutive
R points, representing the distance between the QRS
peaks of the present and previous beats), and the RR
interval averaged over the last ten beats (de Chazal
and Reilly, 2006). The total number of morphology
and temporal features is equal to 303 for each beat.
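As an illustration, the two RR-based temporal features can be computed from the R-peak times as in the following minimal sketch (the R-peak times are hypothetical values, not taken from the MIT-BIH records):

```python
import numpy as np

# Hypothetical R-peak times in seconds (illustrative only, not MIT-BIH data)
r_peaks = np.array([0.00, 0.80, 1.62, 2.40, 3.25])

# RR interval: time span between the QRS peaks of the present and previous beats
rr = np.diff(r_peaks)

# RR interval averaged over (up to) the ten last beats
def rr_mean_last10(rr_intervals):
    return float(np.mean(rr_intervals[-10:]))

print(rr_mean_last10(rr))  # 0.8125
```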
In order to train the classification process and to
assess its accuracy, we randomly selected from the
considered recordings 500 beats for the training set,
whereas 42185 beats were used as the test set (thus,
the training set represents just 1.18% of the test set). The
detailed numbers of training and test beats are
reported for each class in Table 1. Classification
performance was evaluated in terms of three
accuracy measures, which are: 1) the overall
accuracy (OA); 2) the accuracy of each class; and 3)
the average accuracy (AA).
Due to the good performance generally
achieved by the nonlinear SVM classifier based on
the Gaussian kernel [6], we adopted this kernel in all
experiments. The parameters C and γ were varied in
the ranges [10^-3, 200] and [10^-3, 2], respectively. The
k value and the number of hidden nodes (h) of the
kNN and the RBF classifiers were tuned in the
intervals [1, 15] and [10, 60], respectively.
Concerning the PSO algorithm, we considered the
following standard parameters: swarm size S=40,
inertia weight w=0.4, acceleration constants c1 and
c2 both set to unity, and the maximum number of
iterations fixed to 40.
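With these settings, one PSO run updates each particle's velocity and position as in the sketch below. The fitness function here is a simple stand-in (the sphere function) so that the sketch is self-contained; it replaces the paper's SVM-based fitness:

```python
import numpy as np

rng = np.random.default_rng(0)

# PSO settings from the text: S=40 particles, w=0.4, c1=c2=1, 40 iterations
S, w, c1, c2, iters, dim = 40, 0.4, 1.0, 1.0, 40, 5

def fitness(x):
    # Stand-in objective (sphere function), minimized at the origin;
    # the paper's SVM-based fitness is replaced here for illustration
    return np.sum(x ** 2, axis=1)

pos = rng.uniform(-1.0, 1.0, (S, dim))
vel = np.zeros((S, dim))
pbest, pbest_f = pos.copy(), fitness(pos)
f0 = pbest_f.copy()                      # initial fitness, kept for comparison
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((S, dim)), rng.random((S, dim))
    # Velocity update: inertia + cognitive (pbest) + social (gbest) terms
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = fitness(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
```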
3.1 Experiment 1: Classification in the
Original Feature Space
In this experiment, we applied the SVM classifier
directly to the whole original high-dimensional
feature space, which is composed of 303 features.
During the training phase, the SVM parameters (i.e.,
C and γ) were selected according to an m-fold cross-
validation (CV) procedure. In all experiments
reported in this paper, we adopted a 5-fold CV. The
same procedure was adopted to find the best
parameters for the kNN and the RBF classifiers. The
best values obtained for the three investigated
classifiers are C=25, γ=0.5, k=3 and h=20. As
reported in Table 2, the OA and AA accuracies
achieved by the SVM classifier on the test set are
equal to 87.95% and 87.60%, respectively. These
results are much better than those achieved by the
RBF and the kNN classifiers. Indeed, the OA and
AA accuracies are equal to 82.78% and 82.34% for
the RBF classifier, and 78.21% and 79.34% for the
kNN classifier, respectively. This experiment
appears to confirm what has been observed in other
application fields, i.e., the superiority of SVM over
traditional classifiers when dealing with
feature spaces of very high dimensionality.
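The m-fold CV model selection used above can be sketched as follows for the kNN classifier. The two-class toy data and the restriction to odd k values are assumptions made here for the sake of a short, self-contained example; the interval for k is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data standing in for the 303-feature ECG beats (assumption)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(Xtr, ytr, Xte, k):
    # Majority vote among the k nearest training samples (Euclidean distance)
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) >= 0.5).astype(int)

def cv_accuracy(X, y, k, folds=5):
    # m-fold CV: each fold serves once as the validation set
    idx = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        te = idx[f::folds]
        tr = np.setdiff1d(idx, te)
        accs.append((knn_predict(X[tr], y[tr], X[te], k) == y[te]).mean())
    return float(np.mean(accs))

# Tune k over odd values in the interval [1, 15] (odd to avoid vote ties)
best_k = max(range(1, 16, 2), key=lambda k: cv_accuracy(X, y, k))
```

The same loop structure applies to tuning (C, γ) for the SVM or h for the RBF network, with the grid replaced accordingly.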
3.2 Experiment 2: Classification based
on Feature Reduction
In this experiment, we trained the SVM classifier in
feature subspaces of various dimensionalities. The
desired number of features was varied from 10 to 50
in steps of 10, i.e., from low- to relatively high-
dimensional feature subspaces. Feature reduction
was achieved by means of the traditional Principal
Component Analysis (PCA) algorithm. Figure 1-a
depicts the results obtained in terms of OA by the
three considered classifiers combined with the PCA
algorithm, namely the PCA-SVM, the PCA-RBF
and the PCA-kNN classifiers. In particular, it can be
seen that, for all feature subspace dimensionalities
except the lowest one (i.e., 10 features), the PCA-
SVM classifier maintains a clear superiority over the
other two classifiers. Its best accuracy was obtained
using a feature subspace composed of the first 40
components. The corresponding OA and AA
accuracies are equal to 88.98% and 88%,
respectively. Compared with the results obtained by
the SVM classifier in the original feature space (i.e.,
without feature reduction), this represents a slight
increase of 1.03% in OA and 0.4% in AA (see
Table 2).
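The PCA reduction step can be sketched as follows; the data matrix is random and stands in for the actual 200-odd training beats with 303 features each (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random matrix standing in for beats x 303 morphology/temporal features
X = rng.normal(size=(200, 303))

def pca_project(X, n_components):
    # Center the data, then project onto the leading principal
    # components obtained from the SVD of the centered matrix
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Feature subspaces of 10, 20, ..., 50 dimensions, as in Experiment 2
subspaces = {d: pca_project(X, d) for d in range(10, 51, 10)}
```

In the paper's setting, the projection would be fitted on the training beats only and then applied to the test beats before training the SVM, RBF, or kNN classifier.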
3.3 Experiment 3: Classification with
PSO-SVM
In this experiment, we applied the PSO-SVM
classifier on the available training beats. At
BIOSIGNALS 2008 - International Conference on Bio-inspired Systems and Signal Processing