cipal directions gives an envelope around the object-class
data and provides a strategy for quick rejection of non-
object-class data.
Kernel methods have also been proposed for the task
of developing data-based models. The traditional
data-based modeling technique PCA has been extended
to handle higher-order correlations in the data by
mapping into a higher-dimensional feature space,
yielding Kernel Principal Component Analysis (KPCA)
(Scholkopf et al., 1998). Like PCA, it is computationally
simple in that it requires only an eigenvalue
decomposition, and it finds uncorrelated features in the
higher-dimensional space that explain the structure of the
positive-class data. However, in object detection problems,
where many thousands of object-class samples are used to
train KPCA, the run-time computational complexity blows up,
since projecting each test sample requires kernel
evaluations against all training samples.
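As a rough illustration (a plain-numpy sketch with arbitrarily chosen data and kernel width, not the paper's setup), RBF-kernel KPCA reduces to an eigenvalue decomposition of the centred kernel matrix, and projecting one test sample costs one kernel evaluation per training sample:

```python
import numpy as np

def kpca_fit(X, gamma, k):
    """RBF-kernel PCA in the spirit of (Scholkopf et al., 1998).
    Returns projection coefficients for the top-k components."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                  # n x n kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    w, V = np.linalg.eigh(J @ K @ J)         # eigen-decomposition
    idx = np.argsort(w)[::-1][:k]            # k largest eigenvalues
    return V[:, idx] / np.sqrt(w[idx])       # normalised coefficients

def kpca_project(x, X, A, gamma):
    # One kernel evaluation per TRAINING sample: with thousands of
    # training samples this is the run-time bottleneck noted above.
    kx = np.exp(-gamma * ((X - x) ** 2).sum(1))
    return kx @ A                            # (test-kernel centring omitted)
```

The training cost is a single n-by-n eigendecomposition, but the per-window projection cost grows linearly with the training set size.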
In this work, we apply a cascaded structure for
object detection, in which negative samples are
rejected in different stages according to their degree
of closeness to the positive-class distribution. It is
therefore necessary to have strong classifiers in the
later stages to separate difficult negative samples,
i.e. non-objects that resemble objects, from the
positive class (Heisele et al., 2003). Traditionally,
artificial neural networks have been used as strong
classifiers. However, neural networks demand lengthy
training, convergence of the training process is
sometimes uncertain, and the choice of network
architecture remains somewhat of an art. In the early
1990s, kernel methods (Vapnik, 1999) such as Support
Vector Classifiers (SVC) and support vector regressors
were developed for classification and function
approximation tasks. The advantage of these methods
over neural networks is that they solve the nonlinear
problem implicitly. They also exhibit good
generalization capability because of their
regularization properties.
Another data-based modeling technique in kernel
feature space is Support Vector Data Description
(SVDD) (Tax and Duin, 2004). Unlike SVC, which finds
a separating hyperplane between positive- and
negative-class training data, SVDD finds an enclosing
sphere of minimal volume for the positive-class data
in a high-dimensional feature space. This kind of
model is particularly suited to the object detection
problem (Seo and Ko, 2004; Tax and Duin, 2004). The
major disadvantage of SVDD when applied to object
detection, however, is the number of kernel
computations involved, which is of the order of the
number of support vectors generated during training.
In the typical object detection problems we target
(face detection and people detection), the number of
support vectors can be as high as a few thousand.
Because of this large number of support vectors (SVs),
the computational cost can range from half a minute
to a few minutes.
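To make this cost concrete (a minimal numpy sketch with an assumed RBF kernel and hypothetical variable names, not the authors' implementation), the SVDD test statistic is the squared feature-space distance to the sphere centre a = sum_i alpha_i phi(x_i); evaluating it for one window requires a kernel computation against every SV:

```python
import numpy as np

def rbf(X, Y, gamma):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svdd_sq_dist(x, svs, alphas, gamma):
    """||phi(x) - a||^2 = k(x,x) - 2 sum_i a_i k(x, x_i)
                            + sum_ij a_i a_j k(x_i, x_j).
    The double-sum term is a constant that can be precomputed
    offline; the middle term costs len(svs) kernel evaluations
    for every window tested."""
    kx = rbf(x[None, :], svs, gamma)[0]
    const = alphas @ rbf(svs, svs, gamma) @ alphas
    return 1.0 - 2.0 * (alphas @ kx) + const   # k(x,x) = 1 for RBF
```

A window is accepted as object-like when this distance falls below the sphere radius; with thousands of SVs the middle term dominates the run time.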
In this paper, to tackle this computational cost, we
propose to leverage the technique of reducing the
number of support vectors (Romdhani et al., 2001)
within SVDD. The number of support vectors can be
reduced from thousands to a few hundred without
compromising much on accuracy.
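One way to approximate an SV expansion with fewer vectors is the standard fixed-point iteration for RBF kernels, sketched here for a single reduced-set vector (this is illustrative only; the paper follows the scheme of (Romdhani et al., 2001)):

```python
import numpy as np

def reduced_set_vector(svs, alphas, gamma, n_iter=100):
    """Find one vector z whose image phi(z) approximates the
    expansion sum_i alpha_i phi(x_i) in RBF feature space.
    Fixed point: z = sum_i alpha_i k(x_i, z) x_i
                     / sum_i alpha_i k(x_i, z)."""
    z = (alphas[:, None] * svs).sum(0) / alphas.sum()  # weighted mean
    for _ in range(n_iter):
        k = np.exp(-gamma * ((svs - z) ** 2).sum(1))   # k(x_i, z)
        w = alphas * k                                 # SVDD alphas >= 0
        z = (w[:, None] * svs).sum(0) / w.sum()
    return z
```

Applying this greedily to the residual expansion yields the small set of reduced-set vectors used at run time in place of thousands of SVs.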
The quick rejection of non-object data using linear
PCA and its associated test statistics, followed by
Reduced Set SVDD, leads to a good balance between
speed and accuracy. Hence, we propose an efficient
(in terms of both speed and accuracy) method for
object detection based on a novel cascade of linear
PCA modeling and a series of Reduced Set SVDDs
(RSSVDDs) with an increasing number of reduced-set
SVs.
The outline of the paper is as follows. The method
for quick rejection based on PCA modeling is
explained in the next section. RSSVDD is explained
in Section 3. The overall approach of cascaded PCA
and RSSVDDs is explained in Section 4. Section 5
presents experiments and results of object detection
(specifically on face data). In Section 6 we draw
conclusions and outline plans for future work.
2 PCA MODELING OF OBJECT
CLASS AND THRESHOLDING
STATISTICS
PCA is a versatile data analysis tool. It can be
considered a data modeling tool, with the major
principal components capturing most of the variance
in the covariance matrix of the data; the remaining
components are assumed to represent noise. The steps
involved in PCA modeling are summarized in the
algorithm below. PCA-based feature extraction has
received considerable attention in computer vision.
In previous works (Moghaddam and Pentland, 1997),
the image is represented by features in a
low-dimensional space spanned by the principal
components, and these features are then fed to a
classifier. PCA is thus predominantly used for
feature extraction and dimensionality reduction; the
modeling perspective is missing.
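The modeling steps just referred to can be sketched generically as follows (a plain-numpy outline under assumed notation, not the paper's algorithm box), including the residual that the SPE statistic of the next paragraph measures:

```python
import numpy as np

def fit_pca_model(X, k):
    """Fit a k-component PCA model to positive-class data X (n x d)."""
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)     # covariance of centred data
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]        # retain the k largest
    return mu, V[:, idx], w[idx]         # mean, loadings P, variances

def spe(x, mu, P):
    """Squared prediction error: the part of (x - mu) that the
    retained principal components fail to explain."""
    r = (x - mu) - P @ (P.T @ (x - mu))  # residual off the PCA subspace
    return float(r @ r)
```

Samples far from the object-class subspace get a large residual, which is what permits the quick rejection of non-object windows.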
PCA has traditionally been applied with a modeling
perspective in chemometrics, for the purpose of fault
detection. Fault detection using PCA models is
normally accomplished by applying two statistics.
The squared prediction error (SPE), which indicates
the amount by which a sample deviates from the
model, is defined
EFFICIENT OBJECT DETECTION USING PCA MODELING AND REDUCED SET SVDD