best performance can be achieved by using the higher-order statistics (HOS) approach. In many applications, separation algorithms combine two approaches: HOS and second-order statistics (SOS). The SOS approach is useful for blind signal separation when the source signals are statistically non-stationary.
The fundamental restriction in ICA methods is that the independent components must be non-Gaussian for ICA to be possible. The classical measure of non-Gaussianity is kurtosis, the fourth-order cumulant (Cichocki, Amari, 2002). A second measure is negentropy, which is based on the information-theoretic quantity of (differential) entropy. Another approach to ICA separation is based on information theory: minimization of mutual information (Van Hulle, 2008).
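As a rough illustration of kurtosis as a non-Gaussianity measure, the following minimal Python sketch (not part of the compared algorithms; the sample distributions are invented for illustration) shows that the excess kurtosis is near zero for a Gaussian, positive for a super-Gaussian (Laplacian) source, and negative for a sub-Gaussian (uniform) source:

```python
import numpy as np

def kurtosis(y):
    """Excess kurtosis: zero for a Gaussian, nonzero for most
    non-Gaussian distributions, hence usable as a non-Gaussianity measure."""
    y = (y - y.mean()) / y.std()
    return np.mean(y**4) - 3.0

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)
laplacian = rng.laplace(size=100_000)        # super-Gaussian: positive kurtosis
uniform = rng.uniform(-1, 1, size=100_000)   # sub-Gaussian: negative kurtosis

print(kurtosis(gaussian))   # close to 0
print(kurtosis(laplacian))  # clearly positive
print(kurtosis(uniform))    # clearly negative
```

In practice, kurtosis is sensitive to outliers, which is one motivation for the negentropy-based measures mentioned above.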
Usually, algorithms for Independent Component Analysis can be divided into two categories (Cichocki, Amari, 2002). Algorithms in the first category rely on batch computations minimizing or maximizing some relevant criterion function, for example: FOBI (Fourth Order Blind Identification), FOBI-E (Fourth Order Blind Identification with Transformation matrix E), JADE (Joint Approximate Diagonalization of Eigen matrices), JADE TD (Joint Approximate Diagonalization of Eigen matrices with Time Delays), and FPICA (Fixed-Point ICA). The problem with these algorithms is that they require very complex matrix or tensorial operations.
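The fixed-point idea behind FPICA can be sketched as follows. This is a minimal one-unit iteration with an assumed nonlinearity g = tanh on whitened data; the mixing matrix and the sources in the demo are invented for illustration, and this is not the paper's MATLAB implementation:

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, seed=0):
    """One-unit fixed-point ICA iteration on whitened data X
    of shape (dim, samples), using g = tanh."""
    rng = np.random.default_rng(seed)
    dim, _ = X.shape
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X
        g = np.tanh(wx)
        g_prime = 1.0 - g**2
        # fixed-point update: w <- E[x g(w'x)] - E[g'(w'x)] w, then normalize
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(w_new @ w) > 1.0 - 1e-10:   # converged (up to sign)
            return w_new
        w = w_new
    return w

# demo: mix a uniform and a Laplacian source, whiten, extract one component
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1, 1, 20_000), rng.laplace(size=20_000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # assumed mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E / np.sqrt(d)) @ E.T @ X                    # whitened mixtures
w = fastica_one_unit(Xw)
y = w @ Xw                                         # one recovered component
```

The recovered component matches one of the original sources up to sign and scale; extracting all components additionally requires a deflation or symmetric orthogonalization step.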
Adaptive algorithms in the second category are often based on stochastic gradient methods, for example: NG-FICA (Natural Gradient - Flexible ICA), ERICA (Equivariant Robust ICA - based on Cumulants), and SANG (Self Adaptive Natural Gradient algorithm with nonholonomic constraints). The main problems of these algorithms are slow convergence and dependence on the correct choice of the learning rate parameters (as in neural networks). It has been proven (Cichocki, Amari, 2002) that the Natural Gradient algorithm greatly improves the learning efficiency in the blind separation process.
Generally, the adaptive learning algorithms can be written in a general form by using estimating functions (Cichocki, Amari, 2002):
W(t+1) = W(t) + ΔW(t)                         (3)

where W(t) is the separating matrix, ΔW(t) = μ(t)[I − R_fy]W(t), μ(t) is the learning rate at time t, I is the identity matrix, and R_fy is the covariance matrix of the nonlinearly transformed outputs f(y) and the outputs y.
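Equation (3) can be sketched in code as follows. This is a batch-averaged Python illustration with an assumed activation f = tanh; the mixing matrix and sources are invented for illustration, and it is not the implementation evaluated in the paper:

```python
import numpy as np

def natural_gradient_ica(X, mu=0.05, n_epochs=500, seed=0):
    """Natural-gradient separation following eq. (3):
    W(t+1) = W(t) + mu(t) [I - R_fy] W(t),
    with R_fy estimated as the sample average of f(y) y^T and f = tanh."""
    rng = np.random.default_rng(seed)
    dim, n = X.shape
    W = np.eye(dim) + 0.01 * rng.normal(size=(dim, dim))
    I = np.eye(dim)
    for _ in range(n_epochs):
        Y = W @ X
        R_fy = (np.tanh(Y) @ Y.T) / n        # estimate of R_fy
        W = W + mu * (I - R_fy) @ W          # eq. (3) with constant mu
    return W

# demo: two Laplacian sources through an assumed 2x2 mixing matrix
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 10_000))
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = A @ S
W = natural_gradient_ica(X)
Y = W @ X                                    # separated outputs
```

Note the equivariant form of the update: the correction multiplies W from the right, which is what gives the natural gradient its improved convergence compared to the ordinary gradient.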
Many methods have been proposed to remove
eye blinks and muscle activity from EEG recordings
(Rangayyan, 2002; Sanei, Chambers, 2007).
Applications of ICA approach to EEG data have
concentrated on source localization and on artifacts
removal. Usually, the EEG recordings can be first
decomposed into useful signal and undesired
subspace of components using standard techniques
like local and robust PCA, SVD or nonlinear
adaptive filtering (Rangayyan, 2002). In the following step, the ICA algorithms decompose the observed signals (the signal subspace) into independent components. It is worth noting that some useful sources are not necessarily statistically independent. Therefore, perfect separation of the primary sources cannot be achieved by any ICA procedure (Roberts, Everson, 2001). However, in this experiment the separation of the EEGs themselves is not important, only the removal of the independent undesired components.
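The removal step itself can be illustrated with a minimal sketch: zero the artifactual independent components and back-project the rest through the inverse of the separating matrix. The synthetic blink, the 10 Hz activity, the mixing matrix, and the use of the exact inverse as a stand-in for an estimated ICA unmixing matrix are all assumptions for illustration:

```python
import numpy as np

def remove_components(X, W, artifact_idx):
    """Zero the undesired independent components and back-project:
    X_clean = W^{-1} @ Y with the artifact rows of Y = W @ X set to zero."""
    Y = W @ X                      # independent components
    Y[artifact_idx, :] = 0.0       # discard artifactual components
    return np.linalg.inv(W) @ Y    # reconstruct the cleaned channels

# demo: a slow eye-blink-like transient mixed with 10 Hz activity
t = np.linspace(0.0, 2.0, 1000)
blink = np.exp(-((t - 1.0) / 0.05) ** 2)    # synthetic eye-blink transient
eeg = np.sin(2 * np.pi * 10 * t)            # synthetic 10 Hz activity
S = np.vstack([blink, eeg])
A = np.array([[1.0, 0.3], [0.8, 1.0]])      # assumed mixing matrix
X = A @ S
W = np.linalg.inv(A)                        # stand-in for an estimated ICA result
X_clean = remove_components(X, W, [0])      # remove the blink component
```

With a perfect unmixing matrix the blink vanishes exactly from every channel; with an estimated one, residual leakage depends on the quality of the separation.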
3 METHODS AND MATERIALS
The three chosen adaptive algorithms whose performance is compared in this paper have been implemented in MATLAB.
Figure 2: Artifacts: a) eye blinks (1÷2.5) Hz; b) muscle (20÷60) Hz.
Figure 3: An example of EEG data with eye blinks and
muscle artifacts.
The EEG signals have been prepared using
BIOSIG (http://biosig.sourceforge.net/index.html),
BIOSIGNALS 2009 - International Conference on Bio-inspired Systems and Signal Processing