L1-ONPP. This article presents experiments on synthetic data showing the susceptibility of L2-ONPP to outliers and compares the performance of L2-ONPP and L1-ONPP on data containing outliers. The relationship between PCA and ONPP is established and proved theoretically, and experiments performed on synthetic as well as real data support the claim that the ONPP basis can be obtained using PCA. Experimental results suggest that L1-ONPP outperforms L2-ONPP when dealing with outliers.
In the next section, L1-norm based PCA is explained in detail; Section III establishes the relation between ONPP and PCA; Section IV presents experimental results, followed by the conclusion in Section V.
2 L1-NORM FOR DIMENSIONALITY REDUCTION
All conventional dimensionality reduction techniques employ the optimization of a cost function in terms of the L2-norm. Conventional ONPP is also based on L2-norm optimization (Kokiopoulou and Saad, 2007). Although it has been successful in many problems, it is sensitive to the presence of outliers, because the effect of outliers with a large norm is exaggerated by the use of the L2-norm. To alleviate this problem and achieve robustness, research has been carried out on L1-norm based dimensionality reduction techniques. Much work has been done on PCA based on the L1-norm (Ding et al., 2006), (Baccini et al., 1996), (Ke and Kanade, 2005), (Kwak, 2008). However, little work has been carried out to propose L1-norm based variants of recently proposed dimensionality reduction techniques such as LPP and ONPP.
In (Baccini et al., 1996), (Ke and Kanade, 2005), each component of the error between an original data point and its projection was assumed to follow a Laplacian distribution instead of a Gaussian, and maximum likelihood estimation was used to fit L1-norm PCA (L1-PCA) to the given data. In (Baccini et al., 1996), a heuristic estimate for the general L1 problem was used to obtain a solution of L1-PCA, while in (Ke and Kanade, 2005), weighted median and convex programming methods were proposed for L1-norm PCA. Despite its robustness, L1-PCA has several drawbacks and is computationally expensive, because it is based on linear or quadratic programming. In (Ding et al., 2006), R1-PCA was proposed, which combines the merits of L2-PCA and L1-PCA: it is rotationally invariant like L2-PCA, and it successfully suppresses the effect of outliers as L1-PCA does. However, these methods depend strongly on the dimension d of the subspace to be found; for example, the projection vector obtained when d = 1 may not lie in the subspace obtained when d = 2. Moreover, since the algorithm is iterative, it takes a long time to converge for a high-dimensional input space. Let us now discuss the work on L1-norm based PCA.
2.1 L1-norm PCA
Let $X = [x_1, x_2, \dots, x_n] \in \mathbb{R}^{m \times n}$ be the given data, where $m$ and $n$ denote the dimension of the original input space and the number of data samples, respectively. Without loss of generality, the data is assumed to have zero mean, i.e. $\bar{x} = 0$. L2-PCA tries to find a $d \,(< m)$ dimensional linear subspace such that the basis vectors capture the directions of maximum variance, by maximizing the objective function:
$$
\begin{aligned}
\arg\max E(y) &= \arg\max \sum_{i=1}^{n} \| y_i - \bar{y} \|_2, \qquad y_i = V^T x_i \\
\arg\max E(V) &= \arg\max \sum_{i=1}^{n} \| V^T x_i - V^T \bar{x} \|_2 \\
\arg\max E(V) &= \arg\max \sum_{i=1}^{n} \| V^T x_i \|_2 \\
\arg\max E(V) &= \arg\max \| V^T X \|_2 \qquad (1) \\
&\text{subject to } V^T V = I_d
\end{aligned}
$$
where $V \in \mathbb{R}^{m \times d}$ is the projection matrix whose columns constitute the basis of the $d$-dimensional linear subspace.
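To make Eq. (1) concrete, the following is a minimal sketch, not taken from the cited works, of the standard closed-form solution of the L2-PCA objective: the columns of $V$ are the top-$d$ eigenvectors of the sample covariance matrix. The function and variable names (l2_pca, Xc, C) are illustrative.

```python
import numpy as np

def l2_pca(X, d):
    """Closed-form L2-PCA: columns of V are the top-d eigenvectors
    of the covariance of X, where X is m x n (features x samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center the data so x_bar = 0
    C = Xc @ Xc.T / Xc.shape[1]              # m x m sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    V = eigvecs[:, ::-1][:, :d]              # top-d eigenvectors as columns
    return V                                 # V^T V = I_d by construction

# Usage: project n samples in R^m onto a d-dimensional subspace
X = np.random.randn(10, 200)                 # m = 10, n = 200
V = l2_pca(X, d=2)
Y = V.T @ X                                  # low-dimensional representation
```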
In (Kwak, 2008), instead of maximizing the L2-norm based variance in the original space, a method that maximizes the L1-norm dispersion in the feature space is presented in order to achieve robust and rotationally invariant PCA. The approach presented in (Kwak, 2008) for L1-norm optimization is simple, iterative, and easy to implement, and it is proven to find a locally maximal solution. Maximizing the dispersion using the L1-norm in the feature space can be written as
$$
\arg\max E(V) = \arg\max \| V^T X \|_1 \qquad (2)
$$
A closed-form solution is not possible for the L1-norm maximization problem; thus the basis vectors are sought iteratively.
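As an illustration of this iterative search, the following is a minimal sketch of the greedy sign-flipping procedure of (Kwak, 2008) for a single projection vector. It is a simplification under stated assumptions (one component, random initialization, naive tie-breaking for zero projections), not the paper's reference implementation, and the name pca_l1_single is illustrative.

```python
import numpy as np

def pca_l1_single(X, max_iter=100, seed=0):
    """One projection vector maximizing ||w^T X||_1 (sketch of Kwak, 2008).
    X is m x n with zero-mean columns."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)                    # start from a random unit vector
    for _ in range(max_iter):
        p = np.sign(w @ X)                    # polarity of each sample's projection
        p[p == 0] = 1                         # naive tie-break for zero projections
        w_new = X @ p                         # polarity-weighted sum of samples
        w_new /= np.linalg.norm(w_new)        # renormalize to the unit sphere
        if np.allclose(w_new, w):             # fixed point: local maximum reached
            return w_new
        w = w_new
    return w
```

Each iteration cannot decrease the objective $\|w^T X\|_1$, so the procedure converges to a locally maximal solution; further basis vectors can be obtained greedily by projecting the samples onto the orthogonal complement of the vectors found so far and rerunning the procedure.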