Neural Approaches to Image Compression/Decompression Using PCA-Based Learning Algorithms

Luminita State 1, Catalina Cocianu 2, Panayiotis Vlamos 3 and Doru Constantin 1

1 Department of Computer Science, University of Pitesti, Pitesti, Romania
2 Department of Computer Science, Academy of Economic Studies, Bucharest, Romania
3 Department of Computer Science, Ionian University, Corfu, Greece
Abstract. Principal Component Analysis is a well-known statistical method for feature extraction, data compression and multivariate data projection. Aiming to obtain a guideline for choosing a proper method for a specific application, we developed a series of simulations on some of the most commonly used PCA algorithms, namely GHA, the Sanger variant of GHA, and APEX. The paper reports conclusions derived experimentally on the convergence rates of these algorithms and their corresponding efficiency for specific image processing tasks.
1 Introduction
Principal component analysis identifies a linear transform such that the axes of the resulting coordinate system correspond to the directions of largest variability of the signal, and the signal features expressed in the new coordinate system are uncorrelated.
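To make the transform concrete, the following NumPy sketch (an illustration with our own variable names, not code from the experiments reported here) estimates the principal directions from the sample covariance matrix and verifies that the projected features are uncorrelated:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean signal with correlated components.
n, T = 4, 5000
A = rng.normal(size=(n, n))
X = rng.normal(size=(T, n)) @ A.T        # rows are observations

C = np.cov(X, rowvar=False)              # sample covariance matrix
eigvals, W = np.linalg.eigh(C)           # eigenvalues in ascending order
W = W[:, ::-1]                           # reorder: decreasing variance

Y = X @ W                                # features in the new coordinates
# The covariance of Y is (approximately) diagonal: the features
# expressed in the eigenvector basis are uncorrelated.
print(np.round(np.cov(Y, rowvar=False), 2))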
One of the most frequently used methods in the study of the convergence properties of different stochastic PCA learning algorithms proceeds by reducing the problem to the analysis of the asymptotic stability of the trajectories of a dynamical system whose evolution is described in terms of an ODE [5].
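As an illustration of this technique (written in standard notation, not necessarily that of [5]), the single-unit Oja rule and its associated averaged ODE are

\begin{align}
w(t+1) &= w(t) + \eta_t \bigl( y(t)\, X(t) - y(t)^2\, w(t) \bigr), & y(t) &= w(t)^{\mathsf T} X(t),\\
\frac{dw}{d\tau} &= C\, w - \bigl( w^{\mathsf T} C\, w \bigr)\, w, & C &= E\bigl[ X X^{\mathsf T} \bigr].
\end{align}

The asymptotically stable equilibria of this ODE are, up to sign, the unit eigenvectors of $C$ corresponding to the largest eigenvalue, so the convergence of the stochastic rule can be read off from the stability analysis of the dynamical system.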
The Generalized Hebbian Algorithm (GHA) extends Oja's learning rule to the learning of the first principal components. Aiming to obtain a guideline for choosing a proper method for a specific application, we developed a series of simulations on some of the most commonly used PCA algorithms, namely GHA, the Sanger variant of GHA, and APEX.
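For reference, a minimal NumPy sketch of the standard GHA (Sanger) update follows; the function name, learning rate, and initialization are our own illustrative choices, not the exact configuration used in the simulations:

import numpy as np

def gha_update(W, x, lr=1e-4):
    """One GHA (Sanger) step. W is (m, n); its rows converge, up to
    sign, to the m leading principal directions of the input
    distribution. x is a zero-mean sample of dimension n."""
    y = W @ x                                  # outputs of the m linear units
    # Sanger's rule: dW = lr * ( y x^T - LT[ y y^T ] W ), where LT[.]
    # keeps the lower-triangular part (diagonal included).
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Example: recover the two leading principal directions.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
W = 0.1 * rng.normal(size=(2, 5))
for _ in range(20000):
    x = A @ rng.normal(size=5)                 # zero-mean correlated sample
    W = gha_update(W, x)
# Rows of W now approximate the top-2 eigenvectors of A A^T (up to sign).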
2 Hebbian Learning in Feed-forward Architectures
The input signal is modeled as a wide-sense-stationary $n$-dimensional process $(X(t))_{t \ge 0}$ of mean 0 and covariance matrix