Performance Evaluation of Methods for Correcting Ocular Artifacts
in Electroencephalographic (EEG) Recordings
Murielle Kirkove, Clémentine François, Aurélie Libotte and Jacques G. Verly
Department of Electrical Engineering and Computer Science
University of Liège, Grande Traverse 10, B 4000 Liège, Belgium
Keywords: Electroencephalography, Ocular Artifact, Wavelet Transform, Adaptive Filtering, Blind Source Separation.
Abstract: The presence of ocular artifacts (OA) due to eye movements and eye blinks is a major problem for the
analysis of electroencephalographic (EEG) recordings in most applications. A large variety of methods
(algorithms) exist for detecting and/or correcting OA's. We identified the most promising methods,
implemented them, and compared their performance for correctly detecting the presence of OA’s. These
methods are based on signal processing “tools” that can be classified into three categories: wavelet
transform, adaptive filtering, and blind source separation. We evaluated the methods using EEG signals
recorded from three healthy persons subjected to a driving task in a driving simulator. We performed a
thorough comparison of the methods in terms of the usual performance measures (sensitivity, specificity,
and ROC curves), using our own manual scoring of the recordings as ground truth. Our results show that
methods based on adaptive filtering, such as LMS and RLS, appear to be the best for successfully identifying OA's in EEG recordings.
1 INTRODUCTION
Electroencephalographic (EEG) recordings reflect
the neuronal and electrical activity within the brain.
They are obtained from electrodes placed on the
scalp. They are often contaminated by signals from
other sources, called artifacts. (Artifact is also used
to denote the local deformation of the signal of
interest, here the EEG.) One distinguishes between
physiological artifacts and technical artifacts. The
most frequent physiological artifacts are due to the activity of the eyes, the heart, and the muscles. Among them, the most common are the ocular artifacts (OA's), due to the movements of the eyeballs and eyelids. Technical artifacts are mostly
due to electrode placement problems and body
movements.
All artifacts result in an EEG recording that may
be quite different, generally locally, from the true
underlying EEG signal reflecting the brain activity.
It is thus critical to do something about OA’s.
The three usual ways of dealing with OA’s are
prevention, rejection, and removal. Prevention
consists in reducing the occurrences of OA’s by
giving proper instructions to patients. However,
some OA’s are involuntary and unavoidable.
Rejection consists in rejecting the epochs affected by
OA’s. Of course, rejection implies that the OA’s be
first detected. Although simple, rejection has the
major drawback of dropping a significant amount of
valuable data. Removal consists in removing as best
as possible the OA’s to produce a signal that is as
close as possible to the true, underlying EEG signal.
Removal may require that the OA’s be first detected.
Since removing the OA’s corrects the signals, the
term “correction” can also be used in place of the
term “removal”. Any correction method can be
turned into a detection method by thresholding the
difference between the raw signal and the cleaned
one.
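For concreteness, here is a minimal Python sketch of this raw-minus-cleaned thresholding (NumPy assumed; the function name and the threshold parameter are ours, for illustration only):

```python
import numpy as np

def detect_oa_zones(raw_eeg, cleaned_eeg, threshold):
    """Turn a correction method into a detection method: flag the
    samples where the correction changed the signal by more than
    `threshold` (a value to be tuned, e.g. by sweeping it to trace
    a ROC curve as in Section 2.3)."""
    return np.abs(raw_eeg - cleaned_eeg) > threshold
```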
When dealing with OA’s, it is useful to record
the electrooculographic signals (EOG), which allow
the observer (and the algorithms) to establish a
“correlation” between the OA’s in the EEG and the
features in the EOG.
Our interest in the handling of OA’s arose from
the study of drowsiness for subjects actively
involved in a task, such as driving. Indeed, until they
fall asleep, these subjects have their eyes mostly
open. Therefore, the EEG signals recorded for
studying the evolution of drowsiness are affected by
OA’s due to eye movements and eye blinks. This
should be contrasted with the study of sleep, where
subjects have their eyes closed. (However, note that
the eyes and the eyelids can move even when the
eyes are closed.)
Several methods have been proposed in the
literature for cleaning EEG’s from OA’s.
Comprehensive reviews are found in (Croft and
Barry 2000) and (Kandaswamy et al. 2005).
However, we have not found any published paper
comparing a significant number of the proposed
methods in terms of a common performance
measure. The present paper performs such a
comparison.
2 MATERIAL AND METHODS
2.1 Data Recordings
We acquired data at the “Centre d’Etudes des
Troubles de l’Eveil et du Sommeil” (CETES) of the
University Hospital of Liège in the context of the
study of driver drowsiness. Subjects were presented
with a driving task in a simulator. We recorded the
following polysomnographic (PSG) signals: EEG
(for electrodes Fz, Cz, Pz, C3, C4, A1, A2), EOG,
and EMG. The subjects received the instruction to
drive at a constant speed of 80 km/h on a one-way
road, where there were no other vehicles. This task
lasted about two hours. The PSG signals were
recorded with an Embla system at a sampling rate of
500 Hz. They were partitioned into abutting (and thus
non-overlapping) epochs of 1024 samples. The
methods described below, except the last one, were
successively applied to each of these epochs. The
last method was applied on one whole EEG
recording.
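A minimal Python sketch of this epoching step (the function name and the tail-dropping policy are our choices, for illustration):

```python
import numpy as np

def partition_into_epochs(signal, epoch_len=1024):
    """Split a 1-D recording (sampled at 500 Hz in our case) into
    abutting, non-overlapping epochs of `epoch_len` samples,
    dropping the incomplete tail."""
    signal = np.asarray(signal)
    n_epochs = len(signal) // epoch_len
    return signal[:n_epochs * epoch_len].reshape(n_epochs, epoch_len)
```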
2.2 Methods Compared
We identified 12 potentially useful methods in the
literature. We organized these methods according to
the seven signal processing “tools” they use (DWT,
SWT, LMS, RLS, H∞-TV, ICA, SOBI), which we
further organized into three broad categories (of
tools), i.e. wavelet transform (WT), adaptive
filtering (AF), and blind source separation (BSS)
tools. The abbreviations are spelled out below. Table
1 shows the tools used by the 12 methods. For
example, Method 4 uses both the SWT and LMS
tools.
Table 1 shows that Methods 1 and 2 use only
WT tools, that Methods 3, 5, and 7 use only AF
tools, and that all BSS tools are used in combination
with WT tools. Methods 4, 6, and 8-12 use two
tools, each from a different category.
Table 1: Methods compared, and the "tools" they use.

Method       WT      AF        BSS
Method 1     DWT     -         -
Method 2     SWT     -         -
Method 3     -       LMS       -
Method 4     SWT     LMS       -
Method 5     -       RLS       -
Method 6     SWT     RLS       -
Method 7     -       H∞-TV     -
Method 8     SWT     H∞-TV     -
Method 9     SWT     -         ICA
Method 10    SWT     -         SOBI
Method 11    DWT     -         ICA
Method 12    DWT     -         ICA
We now successively consider the broad
categories (WT, AF, BSS) of tools, and, for each, we
provide the description of the methods that use these
tools. These descriptions generally do not refer
explicitly to the method indices of Table 1.
2.2.1 Wavelet Transform (WT) Tools
The wavelet transform (WT) (Mallat 1999) is one of
the leading techniques for analyzing non-stationary
signals like EEG’s. The major asset of wavelet
analysis is its capability to decompose waveforms
into components that are well localized in time and
in frequency (or, equivalently, in scale).
The continuous WT (CWT) constructs a
“family” of wavelets by scaling and translating a
function called the mother wavelet.
The discrete WT (DWT) results from the
discretization of the CWT on a dyadic grid.
Translation invariance is important in many
applications such as change detection and denoising.
The stationary WT (SWT) is a WT algorithm
designed to overcome the lack of translation
invariance of the DWT (Nason and Silverman
1995). Translation invariance is achieved by
removing the down-samplers and up-samplers
present in the DWT.
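The following Python sketch (assuming the PyWavelets package, pywt) illustrates the difference: the DWT halves the coefficient arrays at each level, while the SWT keeps them at the signal length:

```python
import numpy as np
import pywt

x = np.random.randn(1024)

# DWT: coefficient arrays shrink by 2 at each level (dyadic grid).
cA, cD = pywt.dwt(x, 'haar')           # len(cA) == len(cD) == 512

# SWT: the down-samplers are removed, so every level keeps the
# full signal length, which makes the transform translation-invariant.
coeffs = pywt.swt(x, 'haar', level=3)  # each array has length 1024
```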
2.2.1.1 Detection of OA’s with DWT
Krishnaveni et al. applied a wavelet-based
thresholding algorithm to identify zones of OA’s
(Krishnaveni et al. 2006). They based their method
on (Venkataramanan et al. 2004), i.e. they used the
Haar wavelet to precisely detect the moment when
PerformanceEvaluationofMethodsforCorrectingOcularArtifactsinElectroencephalographic(EEG)Recordings
127
the state of the eye changes from open to closed and
vice versa.
The technique is based on the difference in
frequency contents between the EEG recording ([0-
20] Hz) and the OA signals ([0-16] Hz). The raw
EEG signal is decomposed with the Haar DWT. The
detail wavelet coefficients (WCf’s) are then
cancelled and this results in a step function with a
falling edge indicating a change from open to closed
eyes, or with a rising edge indicating a change from
closed to open eyes.
The edges of the approximation are classified
into artifact or non-artifact edges according to their
relative amplitude.
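A rough Python sketch of this detector follows (PyWavelets assumed; the decomposition level and edge threshold are illustrative values of ours, not taken from the cited papers):

```python
import numpy as np
import pywt

def eye_state_changes(eeg, level=5, edge_threshold=50.0):
    """Sketch of the Haar-based OA detector: cancel the detail
    coefficients, reconstruct a step-like approximation, and flag
    large edges as eye-state changes."""
    coeffs = pywt.wavedec(eeg, 'haar', level=level)
    # Keep only the approximation; cancel all detail coefficients.
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    approx = pywt.waverec(coeffs, 'haar')
    edges = np.diff(approx)
    # A falling edge indicates open -> closed; a rising edge,
    # closed -> open. Classify edges by their relative amplitude.
    return np.flatnonzero(np.abs(edges) > edge_threshold)
```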
2.2.1.2 Correction of OA’s with SWT
Krishnaveni et al. consider the OA’s as a noise part
of the EEG recording, and they apply a wavelet-
based thresholding algorithm to remove them
(Krishnaveni et al. 2006). Soft-thresholding is the
most popular thresholding technique, and it has been
theoretically justified by Donoho and Johnstone.
These authors suggest choosing optimal thresholds by minimizing the Stein Unbiased Risk
Estimator (SURE) at each decomposition level
(Donoho and Johnstone 1995).
Soft-thresholding functions are continuous with
discontinuous derivatives. However, continuous
derivatives of first and higher orders are often
desired for optimization problems. A new class of
soft-like-thresholding functions with continuous
derivatives was proposed (Xiao-Ping and Desai
1998). The method consists in applying the SWT
with Coiflet3 as mother wavelet for levels 3 to 6,
selecting the optimal threshold for each level by
minimizing the SURE, applying soft-like-
thresholding, and applying the inverse SWT.
Since OA’s occupy the lower frequency band
([0-16] Hz) of the typical EEG, the threshold
selection and the thresholding are only performed on
the decomposition levels 3 to 6. Coiflet3 is chosen
as the mother wavelet since it resembles the shape of
an eye-blink OA. As a result, large WCf's are generated in OA zones and small WCf's in non-OA zones.
Reducing the amplitude range of the large
coefficients should then result in the removal or
reduction of the OA’s.
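A sketch of this correction pipeline in Python (PyWavelets assumed) is given below; for simplicity it uses plain soft-thresholding and the standard SURE threshold selection on noise-normalized coefficients, where the actual method substitutes the smooth soft-like-thresholding function of (Xiao-Ping and Desai 1998):

```python
import numpy as np
import pywt

def soft_threshold(c, t):
    """Classical soft-thresholding (the smooth 'soft-like' variant
    would replace this function)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def sure_threshold(c):
    """Threshold minimizing the SURE criterion for one level
    (plain Donoho-Johnstone form, assuming unit noise variance)."""
    n = len(c)
    sq = np.sort(c ** 2)
    risks = (n - 2 * np.arange(1, n + 1)
             + np.cumsum(sq) + np.arange(n - 1, -1, -1) * sq) / n
    return np.sqrt(sq[np.argmin(risks)])

def remove_oa_swt(eeg, levels=(3, 4, 5, 6)):
    """Sketch: SWT with coif3, thresholding on levels 3 to 6 only
    (where the OA energy lies), then inverse SWT. The epoch length
    (1024) is divisible by 2**6, as the SWT requires."""
    top = max(levels)
    coeffs = [list(lc) for lc in pywt.swt(eeg, 'coif3', level=top)]
    for lev in levels:
        i = top - lev  # pywt orders the coarsest level first
        cD = coeffs[i][1]
        coeffs[i][1] = soft_threshold(cD, sure_threshold(cD))
    return pywt.iswt([tuple(lc) for lc in coeffs], 'coif3')
```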
2.2.2 Adaptive Filtering (AF) Tools
Adaptive filters (AF’s) belong to the category of
optimal filters (Klados et al. 2009; Correa and Leber
2011): they adapt their coefficients to track the disturbance in the input signal, and subtract the estimated disturbance from that signal. The adaptation is an optimization controlled by the error signal between the input signal and the filter output. We tested three AF algorithms: (1) the least
mean square (LMS) algorithm, which minimizes the
mean squared error, (2) the recursive least squares
(RLS) algorithm, which minimizes a cost function
that is a linear combination of squared errors, and
(3) the H∞ Time-Varying (H∞-TV) algorithm, which minimizes the infinity norm of a linear combination of squared errors (Puthusserypady and Ratnarajah 2006).
We implemented these three AF’s as presented
in Tables 1-3 of (Klados et al. 2009).
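As an illustration of the AF principle, here is a minimal Python sketch of LMS-based cleaning with the EOG as reference input; the filter order and step size are illustrative, and the exact configurations we implemented are those of (Klados et al. 2009):

```python
import numpy as np

def lms_clean(eeg, eog, order=3, mu=0.01):
    """Minimal LMS sketch: the EOG is the reference input, the filter
    output estimates the OA contaminating the EEG, and the error
    signal is the cleaned EEG."""
    eeg = np.asarray(eeg, dtype=float)
    eog = np.asarray(eog, dtype=float)
    w = np.zeros(order)
    cleaned = np.zeros_like(eeg)
    for n in range(order, len(eeg)):
        x = eog[n - order:n][::-1]   # reference tap vector
        y = w @ x                    # estimated ocular artifact
        e = eeg[n] - y               # error = cleaned EEG sample
        w += 2 * mu * e * x          # LMS weight update
        cleaned[n] = e
    return cleaned
```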
The application of AF’s can be combined with
the use of the SWT (Kumar et al. 2008). The
procedure consists in applying the SWT with the
Symlet3 mother wavelet up to eight levels, applying
the AF to the WCf’s, and applying the inverse SWT
to the error signal.
2.2.3 Blind Source Separation (BSS) Tools
Blind source separation (BSS) techniques are based
on a linear decomposition of the measured signals
into sources, also called components. Applied to
EEG and EOG recordings, these methods segregate
the artifactual activities into separate sources.
Therefore, the reconstruction of the recorded EEG
with these sources removed leads to a reduction of
OA’s. These techniques can be used with several
EEG channels.
The most common BSS methods are the
independent component analysis (ICA) and the
second-order blind identification (SOBI).
ICA is a statistical technique in which measured
signals are linearly transformed into sources that are
maximally independent from each other (Hyvärinen
and Oja 2000).
Numerous ICA algorithms exist. FastICA and
Infomax are the most popular ones. Infomax (Bell
and Sejnowski 1995) is effective in separating
sources that have super-Gaussian probability density
functions, but it fails to separate sources that have
negative kurtosis. Unless explicitly stated otherwise,
we have used FastICA.
SOBI (Belouchrani et al. 2002) divides a set of
measured signals into sources by exploiting the
possible time coherence between the sources. It
minimizes the cross-correlations between each
component and other components shifted in time,
across a set of time delays.
BIOSIGNALS2013-InternationalConferenceonBio-inspiredSystemsandSignalProcessing
128
2.2.3.1 Correction of OA’s by Combining a BSS
Tool with High-order Statistics
The two methods we describe here are based on the
same scheme. The first one is that of (Ghandeharion
and Erfanian 2010) where the BSS tool is ICA. The
second one uses SOBI instead of ICA.
The methods first decompose the EEG and EOG
recordings (two channels) into sources, by applying
either of the BSS transforms. They then identify the
artifactual source (in the way described below) and
remove it. They finally produce the output signal by
applying the appropriate inverse transform to the
remaining (non-artifactual) sources.
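A Python sketch of this decompose-remove-reconstruct scheme, using the FastICA implementation of scikit-learn (an assumption; SOBI would replace it in the second variant), with the artifactual source index supplied by the flagging procedure described next:

```python
import numpy as np
from sklearn.decomposition import FastICA

def bss_remove_artifact(eeg, eog, artifact_index):
    """Decompose the two channels into sources, zero the source
    flagged as artifactual, and reconstruct the cleaned EEG."""
    X = np.column_stack([eeg, eog])
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)            # estimated sources
    S[:, artifact_index] = 0.0          # remove the artifactual source
    X_clean = S @ ica.mixing_.T + ica.mean_
    return X_clean[:, 0]                # cleaned EEG channel
```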
The artifactual source is identified as follows. For each source, one computes seven statistical measures: four directly on the source, and one on each of its sets of SWT coefficients for levels 3 to 5. The four measures computed on the source are (1) the mutual information, (2) the projection strength, (3) the correlation, and (4) the kurtosis. The measure computed on the selected SWT coefficients is the kurtosis. For each measure, one then flags the source with the maximum value of that measure. Any source collecting at least four flags is deemed to be artifactual.
2.2.3.2 Correction of OA’s by Simultaneously
using ICA and DWT
The main drawback of ICA is that the number of
measured signals must be larger than the number of
sources for correctly decomposing the different
types of artifacts. Therefore, ICA has difficulty in
separating the OA sources from the true PSG
sources. Moreover, the spectrum of some OA’s is
located in a narrow frequency band. Since ICA
works in the time domain and since DWT has a
good frequency resolution, the combination of ICA
and DWT is particularly well suited.
Automatic wavelet independent component
analysis (AWICA) (Mammone et al. 2012)
combines DWT and ICA on multichannel PSG
recordings to improve the performance of source
separation. This method consists in the following six phases, executed on each epoch (a sketch of the marking step of phase 2 is given after the list):
(1) Each recorded PSG channel is decomposed by DWT with the Daubechies4 mother wavelet; the four frequency bands characterizing the brain activity are represented by the wavelet components (WC's).
(2) An automatic procedure measures the level of "artifactuality" of the WC's. Two measures are used to this end: the kurtosis (Kt) and the Renyi entropy (ReE), the latter quantifying randomness. The Kt and the ReE of the WC's are computed and then normalized to zero mean and unit variance across the WC's. If one of these normalized measures exceeds a fixed threshold, the WC is marked as a critical wavelet component (CWC).
(3) ICA is applied to all CWC's, thereby extracting the critical wavelet independent components (CWIC's).
(4) The set of CWIC's is partitioned into non-overlapping windows. If the Kt or the ReE of a CWIC exceeds a fixed threshold in more than 20% of the windows, that CWIC is marked and rejected.
(5) An inverse ICA is applied so that artifact-free WC's are recovered.
(6) The inverse DWT is applied to reconstruct the cleaned EEG signals (channels).
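A Python sketch of the marking step of phase 2 (SciPy/NumPy assumed; the Renyi order alpha = 2, the histogram-based entropy estimate, the use of the absolute z-score, and the threshold value are our assumptions):

```python
import numpy as np
from scipy.stats import kurtosis

def renyi_entropy(x, alpha=2, bins=64):
    """Renyi entropy of order alpha, estimated from a histogram of
    the component's values."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

def mark_critical_components(wcs, threshold=1.5):
    """Compute Kt and ReE for every wavelet component (each entry of
    `wcs` is a 1-D array), z-score each measure across components,
    and mark a component as critical if either normalized measure
    exceeds the threshold."""
    kt = np.array([kurtosis(wc) for wc in wcs])
    re = np.array([renyi_entropy(wc) for wc in wcs])
    z = lambda v: (v - v.mean()) / v.std()
    return (np.abs(z(kt)) > threshold) | (np.abs(z(re)) > threshold)
```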
2.2.3.3 Correction of OA's by Combining ICA and Wavelet Denoising in a Robust Way
The method called Robust Artifact Removal (RAR)
is presented in (Zima et al. 2012) as a method for
removing short-duration, high-amplitude artifacts
from long-term neonatal EEG recordings.
It consists in three major phases: (1) partitioning
the EEG recording (one channel) into contiguous
epochs in three different ways; (2) independent
processing (as described below) of each partition;
(3) combining the three tentative reconstructions to obtain one that is freer of artifacts.
Phase (2) consists of five processing steps: (1)
ICA, (2) artifact detection, (3) wavelet denoising of
artifact sources by using DWT and soft-
thresholding, (4) replacement of the artifact sources
by their noise part, estimated in previous step, (5)
inverse ICA.
For ICA, we use the implementation of
(Tichavský and Yeredor 2009) of the algorithm
BGSEP (Pham and Cardoso 2001). This algorithm is
based on second-order statistics as in the SOBI
algorithm, but uses the non-stationarity of the
measured signals.
The identification of high-amplitude artifact
sources is based on their duration, which is short in
comparison to the partition length. The authors call
such sources “sparse” in the time domain. They
define the sparsity of a signal as a value proportional
to its maximum amplitude and logarithmically
proportional to the inverse of its median. A source
with sparsity exceeding a fixed threshold is marked
as an artifact.
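The following Python sketch encodes one plausible reading of this sparsity measure; the exact form and threshold are those of (Zima et al. 2012), and ours are assumptions for illustration:

```python
import numpy as np

def sparsity(source, eps=1e-12):
    """One plausible reading: proportional to the maximum absolute
    amplitude and logarithmically proportional to the inverse of the
    median absolute value (constants and exact form assumed)."""
    a = np.abs(source)
    return np.max(a) * np.log(1.0 / (np.median(a) + eps))

def is_artifact_source(source, threshold=10.0):
    # The threshold value is illustrative; RAR fixes it empirically.
    return sparsity(source) > threshold
```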
The specific combination of the three
reconstructions, called “adaptive folding”, allows
PerformanceEvaluationofMethodsforCorrectingOcularArtifactsinElectroencephalographic(EEG)Recordings
129
one to reduce the possible remaining artifacts by
averaging, epoch-by-epoch, the reconstructions
containing the fewest artifacts. The presence, or not,
of artifacts is decided based upon the differences and
the maximum absolute values of the reconstructions.
2.3 Method of Performance Evaluation
As a reminder, Method 1 is a detection method, and all the others are correction methods. No obvious
evaluation method is available for estimating the
performance of a correction method. Indeed, we do
not have an accurate means of measuring the true
EEG signal. For this reason and for the purpose of
evaluating the performance of the methods, we
decided to “turn” the 11 correction methods into
detection methods. This transformation is done by
subtracting the corrected EEG signal from the raw
EEG signal and thresholding the result.
To quantify the detection performance of the 12
methods, we defined the ground truth by manually
segmenting many 2s epochs of 1024 samples each
into true OA zones and true non-OA zones. For this,
we used a tool included in the Matlab toolbox
Fieldtrip (Oostenveld et al. 2011).
The top part of Fig. 1 illustrates the "true" segmentation of, say, one epoch, performed manually by an observer, into OA zones and non-OA
zones. The bottom part illustrates the corresponding
“computed” segmentation performed automatically
by some method. The boundaries of the true and
computed zones define intervals that can each be
labeled as true positive (tp), true negative (tn), false
positive (fp), and false negative (fn). We transform
this labeling into the customary tp, tn, fp, and fn
numbers by simply adding the lengths of the
intervals that have the same, corresponding label.
These four numbers define a confusion matrix. The fundamental measures of performance that we use to compare the 12 methods are:
- the tp rate, which is the ratio between tp and the number of positives, i.e. tp + fn;
- the fp rate, which is the ratio between fp and the number of negatives, i.e. fp + tn.
The tp rate is also called the sensitivity, and "1 - the fp rate" is called the specificity. We use the common receiver
operating characteristic (ROC) curves for
representing these measures.
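A Python sketch of these computations on boolean sample masks, which is equivalent to summing the lengths of the labeled intervals:

```python
import numpy as np

def rates(true_oa, detected_oa):
    """Sensitivity (tp rate) and fp rate from boolean sample masks."""
    tp = np.sum(true_oa & detected_oa)
    fn = np.sum(true_oa & ~detected_oa)
    fp = np.sum(~true_oa & detected_oa)
    tn = np.sum(~true_oa & ~detected_oa)
    tp_rate = tp / (tp + fn)   # sensitivity
    fp_rate = fp / (fp + tn)   # 1 - specificity
    return tp_rate, fp_rate

# Sweeping the detection threshold and plotting tp_rate against
# fp_rate yields the ROC curve of a method.
```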
Figure 1: Evaluation: segmentations into true (top) and computed (bottom) OA zones and non-OA zones.
3 RESULTS
Figure 2 shows the results of the 12 methods on one
epoch of 1024 samples from one EEG recording.
Figure 2: Results of the 12 methods on one epoch of 1024
samples from one EEG recording. The thin (thick) lines
show the raw (cleaned) EEG signals.
Method 1 correctly detects the OA zone.
Method 2 is not capable of correcting EEG
signals for OA’s. This observation is in
contradiction with the results presented in
(Krishnaveni et al. 2006). Our conclusion is that this
method should not be expected to work because the
BIOSIGNALS2013-InternationalConferenceonBio-inspiredSystemsandSignalProcessing
130
method is one of denoising, and therefore applicable
only to white noise. However, OA’s cannot be
considered to be white noise! Therefore, we decided
to ignore this method in our performance evaluation.
The results of the LMS and RLS methods (Methods 3 to 6) are very similar: the spike due to the OA is weakened. In the results of H∞-TV (Methods 7 and 8), the OA spike is clearly reduced, but useful data is
also perturbed. The results of the BSS methods
(Methods 9 to 12) are quite similar: the OA peak has
disappeared.
Figure 3 shows the ROC curves of the 11
retained methods on the same EEG recording (i.e.
with Method 2 ignored). The four best ROC curves
are given by the LMS and RLS methods (Methods 3
to 6).
Figure 3: ROC curves for the 11 retained methods.
The sensitivity and the specificity have antagonistic behaviors. Therefore, another way of comparing the performances of the methods is to consider the sum of the sensitivity and the specificity: the larger the sum, the better the performance. Table 2 lists the sensitivity, the specificity, and their sum. The rows of the four best methods are marked with an asterisk.
4 DISCUSSION
Method 1, which is a detection method based on a
thresholding of wavelet approximation coefficients,
does not seem to correctly identify all OA zones in
the considered EEG recording (in comparison to the
reference). Indeed, Method 1 has one of the lowest-positioned ROC curves in Figure 3. In addition, we
see from Table 2 that the sensitivity barely reaches
0.275. This means that only 27.5% of OA’s are
correctly detected. However, the method has a high
specificity.
Figure 2 shows that all the other methods (which are correction methods), except for Method 2, are able to remove a substantial amount of OA from the
EEG recording. In each graph of this figure (except
for that of Method 2), one can observe that the spike
due to the OA is clearly reduced. However, as
indicated earlier, it is difficult to evaluate the
performance of the correction methods because we
cannot measure directly the activity of the brain and
of the eyes separately. We will thus discuss the
results of these methods of correction in terms of
their ability to identify correctly the OA zones in the
EEG recording.
In general, methods based on adaptive filtering show better results than those based on BSS. Indeed, Table 2 indicates that the sum of
the values of sensitivity and specificity is higher for
Methods 3 to 8 than for Methods 9 to 12. This is
confirmed by the ROC curves shown in Figure 3,
where one can observe that the curves for Methods 3
to 8 are located closer to the upper-left corner than
those for Methods 9 to 12. Table 2 and Figure 3
indicate that Methods 7 and 8 can correctly identify
the OA zones. However, visual inspection of the
corresponding graphs of Figure 2 reveals that these
methods also remove a lot of useful data. Methods 3
to 6 (LMS- and RLS-based algorithms) are thus the
four best methods to successfully identify OA zones
in the EEG recording.
From Table 2 and Figure 3, one can also
conclude that combining the LMS and RLS
algorithms with the SWT does not improve the
results as compared to using LMS and RLS alone.
Table 2: Best compromise in sensitivity and specificity for the 11 retained methods. The four best methods are marked with an asterisk.

Method        Sensitivity   Specificity   Sens. + Spec.
Method 1      0.275         0.985         1.260
Method 3 *    0.791         0.768         1.559
Method 4 *    0.717         0.813         1.530
Method 5 *    0.642         0.858         1.500
Method 6 *    0.647         0.847         1.494
Method 7      0.587         0.882         1.469
Method 8      0.578         0.882         1.460
Method 9      0.639         0.775         1.414
Method 10     0.641         0.828         1.469
Method 11     0.123         0.956         1.079
Method 12     0.501         0.779         1.280
5 CONCLUSIONS
Ocular artifacts (OA’s) are often present in EEG
recordings. They mask the true, underlying EEG
signal. As a result, the OA’s make the analysis of
EEG recordings more difficult and, more
PerformanceEvaluationofMethodsforCorrectingOcularArtifactsinElectroencephalographic(EEG)Recordings
131
importantly, they can lead to incorrect analysis and
wrong conclusions. To avoid losing valuable data, it is critical to develop robust methods for cleaning OA's out of EEG recordings. For the purpose of
evaluating the state of the art in the detection and
elimination/reduction of OA’s, we implemented 12
promising methods found in the literature. We
evaluated the performance of all the methods in
terms of their ability to correctly detect OA zones in
EEG recordings, as compared to a ground truth
established visually. Results suggest that methods based on adaptive filtering, such as LMS and RLS, as well as their combinations with the SWT, are the best
methods to successfully detect OA zones in EEG
recordings. These methods have higher values of
sensitivity and specificity, and better ROC curves,
than the other correction methods.
ACKNOWLEDGEMENTS
The authors thank IFSTTAR for making available one of their driving simulator software packages, and the "Centre d'Etudes des Troubles de l'Eveil et du Sommeil" (CETES) for making available their facilities and equipment.
REFERENCES
Bell, A., Sejnowski, T., 1995. An information-
maximization approach to blind separation and blind
deconvolution. In Neural Computation, 7(6):1129-
1159.
Belouchrani, A., Abed-Meraim, K., Cardoso J. F., 2002. A
blind source separation technique using second-order
statistics. In Signal Processing, IEEE, 45(2): 434-444.
Correa, M. A. G, Leber, E. L., 2011. Noise removal from
EEG signals in polisomnographic records applying
adaptive filters in cascade. In Adaptive Filtering
Applications. L.G. (Ed.).
Croft, R. J., Barry, R. J., 2000. Removal of ocular artifact
from the EEG : a review. In Clinical Physiology,
30(1): 5-19.
Donoho, D. L., Johnstone, I. M., 1995. Adapting to
unknown smoothness via wavelet shrinkage. In
Journal of the American Statistical Association,
90(432): 1200-1224.
Ghandeharion, H., Erfanian, A., 2010. A fully automatic
ocular artifact suppression from EEG data using
higher order statistics: improved performance by
wavelet analysis. In Medical Engineering and Physics,
32(7): 720-729.
Hyvärinen, A., Oja, E., 2000. Independent component
analysis: algorithms and applications. In Neural
Networks, 13:411-430.
Kandaswamy, A., Krishnaveni, V., Jayaraman S.,
Malmurugan N., Ramadoss K., 2005. Removal of
ocular artifacts from EEG: a survey. In IETE Journal
of Research, 51(2): 10.
Klados, M. A., Papadelis, C., Lythari, C., Bamidis P. D.,
2009. The removal of ocular artifacts from EEG
signals: a comparison of performances for different
methods. In 4th European Conference of the International Federation for Medical and Biological Engineering. J. Sloten, P. Verdonck, M. Nyssen and J. Haueisen (Eds.), Springer Berlin Heidelberg, 22:1259-1263.
Krishnaveni, V., Jayaraman, S., Anitha, L., Ramadoss, K.,
2006. Removal of ocular artifacts from EEG using
adaptive thresholding of wavelet coefficients. In
Journal of Neural Engineering, 3(4):338-346.
Krishnaveni, V., Jayaraman, S., Aravind, S.,
Hariharasudhan, V., Ramadoss, K., 2006. Automatic
identification and removal of ocular artifacts from
EEG using wavelet transform. In Measurement
Science Review, volume 6, section 2, no. 4.
Kumar, P. S., Arumuganathan, R., Sivakumar, K., Vimal,
C., 2008. Removal of artifacts from EEG signals using
adaptive filter through wavelet transform signal
processing. In the 9th IEEE Int'l Conference on Signal Processing.
Mallat, S., 1999. A wavelet tour of signal processing,
(second edition). Academic Press.
Mammone, N., La Foresta, F., Morabito, F. C., 2012.
Automatic artifact rejection from multichannel scalp
EEG by wavelet ICA. In Sensors Journal, IEEE,
12(3):533-542.
Nason, G. P., Silverman, B. W., 1995. The stationary wavelet transform and some statistical applications. In Wavelets and Statistics, Lecture Notes in Statistics, vol. 103, Springer, New York, 281-299.
Oostenveld, R., Fries, P., Maris, E., Schoffelen, J.M.,
2011. Fieldtrip: open source software for advanced
analysis of MEG, EEG, and invasive
electrophysiological data. In Computational
Intelligence and Neuroscience.
Pham, D.T., Cardoso, J.F., 2001. Blind separation of
instantaneous mixtures of non stationary sources. In
IEEE Transactions on Signal Processing, 49: 1837-
1848.
Puthusserypady, S., Ratnarajah, T., 2006. Robust adaptive
techniques for minimization of EOG artefacts from
EEG signals. In Signal Processing, 86(9): 2351-2363
Tichavský, P., Yeredor, A., 2009. Fast approximate joint
diagonalization incorporating weight matrices. In
IEEE Transactions on Signal Processing, 57: 878-
891.
Venkataramanan, S., Kalpakam, N. V., Sahambi J.S.,
2004. A novel wavelet based technique for detection
and de-noising of ocular artifact in normal and
epileptic electroencephalogram. In the 6th Nordic Signal Processing Symposium 2004.
Xiao-Ping, Z., Desai, M. D., 1998. Adaptive denoising
based on SURE risk. In Signal Processing Letters,
IEEE, 5(10):265-267.
Zima, M., Tichavský, P., Paul, K., Krajča, V., 2012.
Robust removal of short-duration artifacts in long
neonatal EEG recordings using wavelet-enhanced ICA
and adaptive combining of tentative reconstructions.
In Physiological Measurements, 33(8):39-49.
BIOSIGNALS2013-InternationalConferenceonBio-inspiredSystemsandSignalProcessing
132