Rehabilitation through Brain Computer Interfaces
Classification and Feedback Study
Arnau Espinosa¹, Rupert Ortner¹, Danut Irimia² and Christoph Guger¹
¹g.tec Guger Technologies OG, Sierningstrasse 14, Schiedlberg, Austria
²Faculty of Electrical Engineering, Technical University of Iasi, Iasi, Romania
Keywords: BCI, EEG, Stroke Rehabilitation, Feedback, 3D Virtual Reality.
Abstract: A Brain-Computer Interface (BCI) is a tool for reading and interpreting signals recorded directly from the
user's brain. Most BCIs are based on one of three types of electroencephalogram (EEG) signals: P300 potentials,
steady-state visually evoked potentials (SSVEPs), and event-related desynchronization (ERD). EEG is typically
recorded non-invasively using active or passive electrodes mounted on the scalp. In recent years, a variety of
BCIs for communication and control applications have been developed. A relatively new and promising idea is to
use BCIs as a tool for stroke rehabilitation: the BCI detects the user's movement intention and provides online
feedback to train the affected parts of the body and restore effective movement. This publication aims to optimize
current BCI strategies for stroke rehabilitation using immersive 3-D virtual reality feedback (VRFB). Other work
has shown that higher-density electrode systems can reveal subtleties of brain dynamics that are not obvious with
fewer electrodes; hence, we used a larger electrode montage than is typical in BCI studies.
1 INTRODUCTION
Brain-Computer Interfaces (BCIs) provide new
communication channels based on different mental
states. In a typical BCI, a user performs voluntary
mental tasks. Each task produces distinct patterns of
electrical activity in the electroencephalogram
(EEG). Using monitoring systems and online signal
processing software, automated tools can identify
which mental task a user performed at a specific
time. Most modern BCIs rely on one of three types
of mental tasks, which are associated with different
types of brain activity:
Imagined movement, which produces event-
related desynchronization (ERD) dominant over
central electrode sites (Guger, 2003 and Neuper,
2009);
Attention to oscillating visual stimuli, which
produces steady-state visually evoked potentials
(SSVEP) dominant over occipital sites (Friman,
2007);
Attention to transient stimuli, which produces the
P300 event-related potential dominant over parietal
and occipital sites (Guger, 2009 and Townsend,
2010).
In the last few years, several publications have
provided evidence that MI-based BCIs can induce
neural plasticity and thus serve as an important tool
to enhance motor rehabilitation for stroke patients.
Ang et al. (Ang, 2009) reported a higher 2-month
post-rehabilitation gain for patients using a BCI-
driven robotic rehabilitation tool compared to a
control group, although the difference was not
statistically significant. Recently, Shindo et al.
(Shindo, 2011) tested the effectiveness of
neurorehabilitation training using a BCI to control
online feedback from a hand orthosis; here too, the
results suggest improved rehabilitation outcomes.
Grosse-Wentrup et al. provide a good overview of
the state of the art on this research topic
(Grosse-Wentrup, 2011).
In MI-based BCIs, neurofeedback plays a crucial
role in optimizing the user's performance (Neuper,
2010). Feedback must reflect the user's task in an
appropriate way: for example, when the BCI is used
for motor rehabilitation, the feedback should
resemble the motor activity.
In this study, two different feedback strategies
that can be used for a rehabilitation task are
evaluated. During two sessions, the participants
were asked to perform MI of either the right or left
hand (in random order) as dictated by a visual
paradigm. The first feedback strategy shows the
hands of an avatar in a 3-D virtual reality feedback
environment (VRFB, see Section 2 for more details).
Either the left or the right hand of the avatar moves
according to the MI. For comparison, a popular
strategy (bar feedback, bFB; e.g. in Guger, 2003)
was used, in which the feedback consists of the
movement of a bar on the computer screen. This bar
always starts in the middle of the screen and extends
either to the left or to the right side of the screen,
according to the classified motor imagination. Nine
subjects performed recordings with 63 EEG channels.
Two subjects performed the same session using both
63 and 27 channels (see Figs. 1 and 2); for these two
subjects we evaluated the difference in accuracy
between the two montages.
Recently, Neuper and colleagues compared
different BCI feedback strategies (Neuper, 2009).
There, the realistic feedback consisted of a hand
grasping a target, and the bar feedback was similar
to the present study. While Neuper used only three
bipolar channels for the classification, the present
study used a common spatial patterns (CSP)
approach that takes advantage of the high number of
EEG channels.
2 METHODS
2.1 Common Spatial Patterns
The CSP method is well known for discriminating
between two motor imagery tasks (Blankertz, 2008)
and was first used for extracting abnormal
components from the clinical EEG (Koles, 1991). By
simultaneously diagonalizing two covariance
matrices, one can construct new time series whose
variance is maximal for one task and minimal for
the other.
Given N channels of EEG for each left and right
trial, the CSP method gives an N x N projection
matrix. This matrix is a set of subject-dependent
spatial patterns, which reflect the specific activation
of cortical areas during hand movement imagination.
With the projection matrix W, the decomposition of
a trial X is described by:
Z = WX    (1)
This transformation projects the variance of X
onto the rows of Z and results in N new time series.
The columns of W⁻¹ are a set of CSPs and can be
considered time-invariant EEG source distributions.
Due to the definition of W, the variance of a left-hand
movement imagination is largest in the first row of Z
and decreases over the subsequent rows; the opposite
holds for a trial with right-hand motor imagery. For
classification of left and right trials, the variances are
extracted as reliable features of the newly designed
N time series.
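For illustration, the construction of W by simultaneous diagonalization can be sketched as follows in Python with NumPy/SciPy, solving the equivalent generalized eigenvalue problem of the two class covariance matrices. The function name and data layout are our own assumptions; the paper does not prescribe a particular implementation.

import numpy as np
from scipy.linalg import eigh

def csp_projection(trials_left, trials_right):
    # trials_*: iterable of trials, each an array of shape (n_channels, n_samples)
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalized spatial covariances
        return np.mean(covs, axis=0)

    c_l, c_r = mean_cov(trials_left), mean_cov(trials_right)
    # Simultaneous diagonalization via the generalized eigenvalue problem
    # c_l * w = lambda * (c_l + c_r) * w; eigenvalues are returned in ascending order.
    eigvals, eigvecs = eigh(c_l, c_l + c_r)
    order = np.argsort(eigvals)[::-1]   # descending: left-class variance first
    W = eigvecs[:, order].T             # rows of W are the spatial filters
    return W                            # columns of inv(W) are the spatial patterns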
However, it is not necessary to calculate the
variances of all N time series. The method provides
a dimensionality reduction of the EEG. Mueller-
Gerking and colleagues (Mueller-Gerking, 1999)
showed that the optimal number of common spatial
patterns is four. Following their results, after
building the projection matrix W from an artifact-
corrected training set X_T, only the first and last two
rows (p = 4) of W are used to process new input data
X. Then the variance (VAR_p) of the resulting four
time series is calculated for a time window T. After
normalizing and log-transforming, four features are
obtained.
f_p = log( VAR_p / Σ_{i=1}^{4} VAR_i ),  p = 1, ..., 4    (2)
With these four features, a linear discriminant
analysis (LDA) classifier categorizes the movement
as either left-hand or right-hand.
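A minimal sketch of the feature extraction of Eq. (2) and the subsequent LDA training could look as follows, assuming scikit-learn's LinearDiscriminantAnalysis as the classifier; the paper does not specify its LDA implementation, and the helper names are ours.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_features(trial, W, p=4):
    # Project the trial on the first and last p/2 spatial filters and
    # compute the normalized log-variance features of Eq. (2).
    rows = list(range(p // 2)) + list(range(-(p // 2), 0))   # [0, 1, -2, -1] for p = 4
    Z = W[rows, :] @ trial
    var = Z.var(axis=1)
    return np.log(var / var.sum())

def train_lda(trials, labels, W):
    # trials: artifact-corrected, band-pass-filtered training trials
    #         (each of shape n_channels x n_samples); labels: 'left'/'right'
    feats = np.array([csp_features(t, W) for t in trials])
    return LinearDiscriminantAnalysis().fit(feats, labels)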
2.2 Data Processing
EEG data were recorded from 63 positions (see Fig.
1) or 27 positions (see Fig. 2) over the motor cortex,
using active electrodes (g.LADYbird, g.tec medical
engineering GmbH, Austria). A multichannel EEG
amplifier (g.HIamp, g.tec medical engineering
GmbH) was used to record the data with a sampling
frequency of 256 Hz. The workflow model is shown
in Fig. 3. The sampled data were passed through a
bandpass filter (Butterworth, 5th order) before the
four spatial filters were applied. The variance was
computed for a moving window of one second, and
normalization was done according to Eq. (2). Finally,
the LDA classification drives the feedback block of
the paradigm.
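The processing chain of Fig. 3 could be sketched as follows, reusing the helpers from Section 2.1. The 8-30 Hz corner frequencies are taken from the caption of Fig. 3, and the 0.5 s step is an assumption matching the half-second classification interval described in Section 2.3.

import numpy as np
from scipy.signal import butter, lfilter

FS = 256                                              # sampling frequency (Hz)
B, A = butter(5, [8, 30], btype='bandpass', fs=FS)    # 5th-order Butterworth, 8-30 Hz

def classify_trial(raw_trial, W_sel, lda, window=1.0, step=0.5):
    # raw_trial: (n_channels, n_samples) EEG of one trial
    # W_sel: the four selected CSP filters (first and last two rows of W)
    filtered = lfilter(B, A, raw_trial, axis=1)        # bandpass filtering
    Z = W_sel @ filtered                               # spatial filtering
    win, hop = int(window * FS), int(step * FS)
    results = []
    for stop in range(win, Z.shape[1] + 1, hop):
        var = Z[:, stop - win:stop].var(axis=1)        # variance in a 1-s moving window
        feat = np.log(var / var.sum())                 # normalization, Eq. (2)
        results.append((stop / FS, lda.predict(feat[None, :])[0]))
    return results                                     # (time in s, 'left'/'right') per step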
2.3 Paradigm and Sessions
Before the tests started, the healthy users (all male,
between 25 and 30 years old, and all right-handed) were
trained on motor imagery tasks until their
performance was stable. After that, the two sessions
with different feedback were executed. The
workflow can be seen in the middle of Fig. 3. Each
session consisted of seven runs; each run included
20 trials for left-hand movement and 20 trials for
right-hand movement in a randomized order.
RehabilitationthroughBrainComputerInterfaces-ClassificationandFeedbackStudy
693
Figure 1: Spatial patterns for S1 during VRFB runs 2, 3, 4
and 5. The upper panel shows the first spatial filter that
sets higher weights to electrodes around the region of C3.
The lower panel is the last spatial pattern that sets more
weight to the region around C4. The small dots show
the 63 electrode positions used. C3, Cz and C4 are
marked separately.
Figure 2: Spatial patterns for S1 during the same runs
as in Fig. 1. This time only 27 channels were used for
the computation. The dots again show the electrode
positions.
The first run (run1) was performed without giving any
feedback. The resulting data were visually inspected,
and trials containing artifacts were manually
rejected. These data were used to compute a first set
of spatial filters (CSP1) and a classifier (WV1).
With this first set of spatial filters and classifier,
another four runs (run2, run3, run4, run5) were
performed while giving online feedback to the user.
The merged data of these four runs (run 2, 3, 4 and
5) were used again to set up a second set of spatial
filters (CSP2) and a classifier (WV2) that used a
higher number of trials and was more accurate.
Finally, to test the online accuracy during the
feedback sessions, two more runs (run 6, run 7;
merged data: run 6 and 7) were done.
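To make this two-stage setup explicit, a hypothetical sketch of the workflow is given below. The variables run1 through run7 stand for lists of (trial, label) pairs after band-pass filtering (and, for run1, artifact rejection); they and build_model are our own placeholders, built on the helpers sketched in Sections 2.1 and 2.2.

def build_model(runs):
    # Pool all trials of the given runs and compute CSP filters plus an LDA classifier.
    trials = [t for run in runs for (t, lab) in run]
    labels = [lab for run in runs for (t, lab) in run]
    left = [t for t, lab in zip(trials, labels) if lab == 'left']
    right = [t for t, lab in zip(trials, labels) if lab == 'right']
    W = csp_projection(left, right)       # see the sketch in Section 2.1
    W_sel = W[[0, 1, -2, -1], :]          # keep the first and last two spatial filters
    lda = train_lda(trials, labels, W)
    return W_sel, lda

# CSP1/WV1: from the feedback-free first run, used online during runs 2-5
CSP1, WV1 = build_model([run1])
# CSP2/WV2: from the merged feedback runs 2-5, evaluated on runs 6 and 7
CSP2, WV2 = build_model([run2, run3, run4, run5])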
Each trial lasted eight seconds; between trials there
was a random intertrial interval of between 0.5 s and
1.5 s. After two seconds, a beep directed the
user to the upcoming cue. The cue-phase, during
which the subject was told to perform either an
imagination of the left or right hand, started at 3s
and stopped at 4.25s. The end of the cue-phase was
marked by a second beep. The feedback-phase
started at 4.25s and lasted until the end of the trial
(8s). The user was asked to perform the MI from the
beginning of the cue-phase until the end of the
feedback-phase.
By comparing the presented cue with the classified
movement, an error rate can be calculated. The error
rate displayed in Table 1 was calculated by applying
CSP2 and WV2 to the merged datasets of runs 6 and
7. The classifier output and the errors were calculated
every half second: for each such calculation, the
classifier was applied to the features, the
classification result was compared to the cue, and the
resulting error rate was averaged over all trials.
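As an illustration of this evaluation, the following sketch computes the time-resolved error rate on a set of test trials. The trial layout and helper names are assumptions consistent with the sketches above, not the authors' code.

import numpy as np

def time_resolved_error(trials, cues, W_sel, lda, fs=256, trial_len=8.0, step=0.5):
    # trials: band-pass-filtered test trials (n_channels x n_samples, 8 s each)
    # cues:   presented cue per trial, 'left' or 'right'
    times = np.arange(step, trial_len + 1e-9, step)
    wrong = np.zeros(len(times))
    for trial, cue in zip(trials, cues):
        Z = W_sel @ trial                      # apply the four CSP filters
        for i, t in enumerate(times):
            stop = int(t * fs)
            start = max(0, stop - fs)          # window of up to 1 s ending at time t
            var = Z[:, start:stop].var(axis=1)
            feat = np.log(var / var.sum())
            wrong[i] += (lda.predict(feat[None, :])[0] != cue)
    return times, 100.0 * wrong / len(trials)  # error rate in percent per time step

# Mean and minimum errors as in Table 1, taken from 3.5 s onwards (test_trials,
# test_cues are hypothetical placeholders for the merged runs 6 and 7):
# t, err = time_resolved_error(test_trials, test_cues, CSP2, WV2)
# mask = t >= 3.5; print(err[mask].mean(), err[mask].min())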
2.4 Feedback Strategies
Feedback strategy number one (bar feedback; bFB)
is quite common for motor imagery tasks. A bar
begins in the middle of the computer screen and
expands either to the left or the right of the screen. If
a left-hand movement is detected, the bar grows to
the left; for a right-hand movement, it extends to the
right side. The length of the bar is proportional to the
classified LDA-distance. During the cue phase, in
addition to the bFB, a red arrow points to the left or
to the right side of the screen, indicating to the user
which MI he or she should perform.
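As a hypothetical illustration of this mapping, the signed LDA distance could drive the bar as sketched below. The saturation value and the sign convention (negative for left) are assumptions; the paper only states that the bar length is proportional to the classified LDA distance.

def bar_extent(lda, feat, max_dist=3.0):
    # Signed distance of the feature vector to the LDA hyperplane;
    # scikit-learn's decision_function returns one value per sample for a binary classifier.
    d = float(lda.decision_function(feat[None, :])[0])
    # Map to a bar extent in [-1, 1]: negative extends left, positive extends right.
    return max(-1.0, min(1.0, d / max_dist))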
Within the virtual reality feedback (VRFB) task,
a virtual reality research system (g.VRsys, g.tec
medical engineering GmbH, Austria) is used. The
user sits in front of a 3D-PowerWall wearing shutter
glasses. The size of the PowerWall is 3.2m x 2.45m,
and the distance between PowerWall and user is
IJCCI2012-InternationalJointConferenceonComputationalIntelligence
694
about 1.5 m.
Figure 3: Workflow of the model. The biosignal amplifier records the data, which are bandpass-filtered between 8 Hz and 30
Hz. Four CSPs are applied, and the variance within a moving window T of one second is computed. The LDA classifier is
then applied to the normalized variances. The output of the classifier drives the feedback block, which gives feedback
according to the chosen session.
The user sees the left and right hands of
an avatar from a subjective point of view (see Fig.
4). The only movement the avatar performs is the
continuous opening and closing of either the left or
the right hand. No modulation of the speed of the
movement is done. During the cue-phase (from
second 3 until second 4.25 of the experiment), the
user needs to know which MI has to be performed.
In the VRFB task, the opening/closing of one of the
hands provides this information. After second 4.25,
a second beep sounds, and the observed movement
of the avatar provides the feedback to the performed MI.
Figure 4: Virtual reality feedback. During the feedback
session, the fingers of either the left or the right hand close
and open according to the classified movement.
3 RESULTS
Table 1 summarizes the results from the three
subjects. For each session, the averaged error rate
over all trials and over the single time-steps starting
from 3.5s until 8s is shown. These values reflect the
accuracy resulting from applying CSP2 and WV2 to
the data of runs 6 and 7, and show the online
accuracy that the users experienced during these
runs. The number in parentheses shows the
minimum error over the single time-steps. For S1 and
S2 the error rate was computed twice: once with all
63 channels and again using only a subset of 27
channels (positions shown in Fig. 2). The latter is, of
course, only an estimate of the error rate the user
would have experienced if only the 27-electrode
subset had been used. For S3, only the 27 channels
were recorded. In three out of four sessions the error
rate increased as the number of electrodes was
reduced; in the remaining session (S1, VRFB) it
decreased from 19.8% with 63 channels to 14.8%
with 27 channels. The minimum error rate increased
in three sessions and stayed constant in one of them
(S1, bFB).
Table 1: Results from the six sessions. The first number in
each cell shows the mean error rate from 3.5 seconds until
8 seconds; the number in parentheses shows the minimum
error rate within this time.

                   bFB                           VRFB
Subject   27ch           63ch           27ch           63ch
S1        12.8 (2.5)     12.75 (2.5)    14.8 (5)       19.8 (4.5)
S2        20.8 (11.25)   19.9 (5.0)     25 (12.5)      19.2 (5.9)
S3        25.0 (8.0)     -              21.8 (10.0)    -
mean      19.5 (7.25)    16.3 (3.75)    20.5 (10.1)    19.5 (5.2)
Fig. 5 shows an example of the error rate from
S1 during the two sessions that used all 63 channels
for classification. The black line at three seconds
indicates the onset of the cue. The error rate before
the cue is about 50 percent and then drops below ten
percent for both sessions. It stays below ten percent
from second 5.5 until the end of the trial, showing
the good control the subject had.
RehabilitationthroughBrainComputerInterfaces-ClassificationandFeedbackStudy
695
Figure 5: Error rate from the two feedback runs for S1.
The vertical bar indicates the cue onset.
Table 2 summarizes the accuracy results of the
seven subjects using all 63 channels. The mean and
minimum error rates were calculated in the same way
as in Table 1. The results show considerable variation
in performance between subjects. In three out of
seven subjects the error rate increased with the
VRFB, but overall the bFB yielded worse results than
the virtual reality feedback (S1, S2, S4 and S6).
Three subjects (S2, S6 and S7) reached mean error
rates of 5% or below in their better session.
Table 2: Accuracy of the 7 subjects using the 63-channel
system. The first number shows the mean error rate and the
second number the minimum error rate, calculated on data
trials from 3.5 seconds until 8 seconds.

                  bFB                       VRFB
Subject   Mean Err.   Min. Err.     Mean Err.   Min. Err.
S1        42.30%      33.80%        37.30%      31.30%
S2        5.50%       0%            3.20%       0%
S3        35.50%      20%           37%         25%
S4        45.70%      37.50%        30.70%      25%
S5        5.20%       2.50%         14.10%      5%
S6        17%         11.30%        5%          1.30%
S7        3.90%       1.30%         4.60%       0%
mean      22.16%      15.20%        18.84%      12.51%
4 CONCLUSIONS
This study compared two different feedback
strategies for performing MI for stroke
rehabilitation. The VRFB provided realistic
feedback that was similar to the imagined
movements. Hence, we expected this strategy would
lead to better classification. The results did not
confirm this hypothesis; in fact, performance was
slightly worse with the VRFB than in the bFB
sessions. After the sessions, subjects reported that it
was quite disturbing when the classifier produced a
misclassification and the "wrong" hand moved during
the VRFB session. We propose that this mismatch
between expected and actual feedback was primarily
responsible for both this cognitive dissonance and the
worse performance. In future studies, we will test
giving feedback only when the correct hand is
classified.
The BCI performed better with 63 EEG channels
than with 27. This result should encourage the use
of larger montages when practical. Furthermore, the
comparison of the spatial patterns shows that
electrodes mounted over the motor cortex near C3
and C4 (which are present in both the 63- and
27-channel configurations) are the most important,
but positions that are not part of the 27-channel
configuration also play an important role for
classification.
The results obtained with 63 electrodes encourage
us to test 128-channel EEG montages in future work.
The current study also includes results from healthy
users only; a future goal will be to apply the lessons
learned here to the rehabilitation of stroke patients.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge the funding by
the European Commission under the Better and
BrainAble projects.
REFERENCES
Ang, K. K., Guan, C., Chua, K. S. G., Ang, B. T., Kuah,
C., Wang, C., 2009. A clinical study of motor imagery-
based brain-computer interface for upper limb robotic
rehabilitation. Conf Proc IEEE Eng Med Biol Soc.,
vol. 2009, pp. 5981-4.
Blankertz, B., Tomioka, R., Lemm, S., Kawanabe, M. and
Müller, K.-R., 2008. Optimizing Spatial Filters for
IJCCI2012-InternationalJointConferenceonComputationalIntelligence
696
Robust EEG Single-Trial Analysis. IEEE Signal
Processing Magazine, vol. 25(1), pp. 41-56.
Friman, O., Volosyak, I. and Gräser, A., 2007. Multiple
channel detection of steady-state visual evoked
potentials for brain-computer interfaces. IEEE Trans.
Biomed. Engng., vol. 54, pp. 742-750.
Guger, C., Edlinger, G., Harkam, W., Niedermayer, I. and
Pfurtscheller, G., 2003. How many people are able to
operate an EEG-based brain-computer interface.
IEEE Trans. Neural Systems and Rehab. Engng., vol.
11, pp. 145-147.
Guger, C., Daban, S., Sellers, E., Holzner, C., Krausz, G.,
Carabalona, R., Gramatica, F. and Edlinger, G., 2009.
How many people are able to control a P300-based
brain-computer interface (BCI)?. Neuroscience
Letters, 462(1), pp. 94-98.
Grosse-Wentrup, M., Mattia, D., and Oweiss, K., 2011.
Using brain–computer interfaces to induce neural
plasticity and restore function. J Neural Eng., vol. 8,
025004.
Koles, Z., 1991. The quantitative extraction and
topographic mapping of the abnormal components in
the clinical EEG. Electroencephalogr. Clin.
Neurophysiol., vol. 79(6), pp. 440-447.
Mueller-Gerking, J., Pfurtscheller, G., and Flyvbjerg, H.,
1999. Designing optimal spatial filters for single-trial
EEG classification in a movement task. Clin
Neurophysiol, vol. 110, pp. 787-798.
Neuper, C., Scherer, R., Wriessnegger, S. and
Pfurtscheller, G., 2009. Motor imagery and action
observation: modulation of sensorimotor brain
rhythms during mental control of a brain-computer
interface, Clin Neurophysiol., vol. 120(2), pp. 239-47.
Neuper, C. and Pfurtscheller, G., 2010. “Neurofeedback
Training for BCI Control”, in Brain–Computer
Interfaces, Revolutionizing Human-Computer
Interaction, Graimann, B., Allison, B. Z. and
Pfurtscheller, G., Eds. Berlin-Heidelberg: Springer,
pp. 65-78.
Shindo, K., Kawashima, K., Ushiba, J., Ota, N., Ito, M.,
Ota, T., 2011. Effects of neurofeedback training with
an electroencephalogram-based brain-computer
interface for hand paralysis in patients with chronic
stroke: a preliminary case series study. J Rehabil
Med, vol. 43(10), pp. 951-957.
Townsend, G., LaPallo, B. K., Boulay, C. B., Krusienski,
D. J., Frye, G. E., Hauser, C. K., Schwartz, N. E.,
Vaughan, T. M., Wolpaw, J. R. and Sellers, E. W.,
2010. A novel P300-based brain-computer interface
stimulus presentation paradigm: moving beyond rows
and columns. Clin Neurophysiol, vol. 121(7), pp.
1109-1120.
RehabilitationthroughBrainComputerInterfaces-ClassificationandFeedbackStudy
697