Similar single-trial P300 classification performance has commonly been reported in the literature. For example, in (Haghighatpanah et al., 2013), 65 % single-trial accuracy was achieved using one to three EEG channels and personalized training data. In (Sharma, 2017), 40 % to 66 % classification accuracy was reported, highly dependent on the individual tested. This paper achieved comparable classification accuracy on a multi-subject dataset. Therefore, because a single model trained once on multi-subject data can be applied to new users, time-consuming training data collection for each new user might be avoided, and the long training times of deep neural networks would no longer pose a problem. However, although the deep neural networks classified successfully, this paper did not confirm any benefit of them over traditional methods.
Even though LSTM did not outperform other clas-
sifiers in the presented P300 experiments, it could be-
come valuable as a layer in a more complex model.
For example, in (Ditthapron et al., 2019), an LSTM layer was used in a multi-task autoencoder: CNN layers first captured spatial-domain features, and an LSTM layer modeled their temporal relationships. The resulting latent vector was then used either to reconstruct the input or for P300 classification. A similar approach could be applied in future work.
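For illustration, a minimal Keras sketch of such a CNN-plus-LSTM classifier is given below (a plain classifier only, without the autoencoder and multi-task heads of (Ditthapron et al., 2019)); the channel count, epoch length, filter sizes, and layer widths are illustrative assumptions, not values taken from the cited work or from the presented experiments.

# Minimal sketch of a hybrid CNN + LSTM classifier for single-trial
# P300 detection. All shapes and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models

N_CHANNELS = 19   # assumed number of EEG channels
N_SAMPLES = 200   # assumed samples per epoch (e.g., 1 s at 200 Hz)

inputs = layers.Input(shape=(N_SAMPLES, N_CHANNELS))

# Temporal convolution over the multi-channel signal extracts local features.
x = layers.Conv1D(filters=16, kernel_size=10, activation='elu')(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)

# The LSTM layer models temporal relationships in the feature sequence.
x = layers.LSTM(32)(x)

# Binary output: target (P300) vs. non-target epoch.
outputs = layers.Dense(1, activation='sigmoid')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])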
This study has several limitations. First, classification results obtained on school-age children outside a laboratory environment may not generalize to a more typical BCI population. Moreover, despite careful manual tuning of hyperparameters, an as-yet-untested RNN architecture might still outperform the presented CNN architecture.
6 CONCLUSION
The presented experiments demonstrated that suc-
cessful P300 detection is possible for a multi-subject
dataset with all presented models (LDA, CNN, RNN).
However, when CNN and RNN were compared directly, the CNN appeared superior: it yielded comparable classification accuracy while producing more stable results and being easier to configure. The presented offline experiments could further be reproduced in an online BCI. Further experiments with stacked CNN and RNN layers could be the aim of future work.
ACKNOWLEDGEMENTS
This work was supported by the University specific
research project SGS-2019-018 Processing of hetero-
geneous data and its specialized applications. Special
thanks go to Master’s students Patrik Harag and Mar-
tin Matas for their initial experiments that inspired
this work.
REFERENCES
Blankertz, B., Lemm, S., Treder, M., Haufe, S., and Müller, K.-R. (2011). Single-trial analysis and classification of ERP components - a tutorial. NeuroImage, 56(2):814–825.
Chollet, F. et al. (2015). Keras. https://keras.io.
Clevert, D.-A., Unterthiner, T., and Hochreiter, S. (2016).
Fast and accurate deep network learning by exponen-
tial linear units (ELUs). CoRR, abs/1511.07289.
Deng, L. and Yu, D. (2014). Deep learning: Methods and applications. Foundations and Trends® in Signal Processing, 7(3–4):197–387.
Ditthapron, A., Banluesombatkul, N., Ketrat, S., Chuangsuwanich, E., and Wilaiprasitporn, T. (2019). Universal joint feature extraction for P300 EEG classification using multi-task autoencoder. IEEE Access, 7:68415–68428.
Farwell, L. A. and Donchin, E. (1988). Talking off the top of
your head: Toward a mental prosthesis utilizing event-
related brain potentials. Electroencephalography and
Clinical Neurophysiology, 70:510–523.
Haghighatpanah, N., Amirfattahi, R., Abootalebi, V., and Nazari, B. (2013). A single channel-single trial P300 detection algorithm. In 2013 21st Iranian Conference on Electrical Engineering (ICEE), pages 1–5.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv:1412.6980. Published as a conference paper at the 3rd International Conference on Learning Representations, San Diego, 2015.
Ledoit, O. and Wolf, M. (2004). Honey, I shrunk the sample covariance matrix. The Journal of Portfolio Management, 30(4):110–119.
Lotte, F., Bougrain, L., Cichocki, A., Clerc, M., Con-
gedo, M., Rakotomamonjy, A., and Yger, F. (2018).
A review of classification algorithms for EEG-based
brain–computer interfaces: a 10 year update. Journal
of Neural Engineering, 15(3):031005.
Luck, S. J. (2005). An introduction to the event-related po-
tential technique. MIT Press, Cambridge, MA.
Manyakov, N. V., Chumerin, N., Combaz, A., and
Van Hulle, M. M. (2011). Comparison of classi-
fication methods for P300 brain-computer interface
on disabled subjects. Intell. Neuroscience, 2011:2:1–
2:12.
McFarland, D. J. and Wolpaw, J. R. (2011). Brain-computer
interfaces for communication and control. Commun.
ACM, 54(5):60–66.
Mouček, R., Vařeka, L., Prokop, T., Štěbeták, J., and Brůha, P. (2017). Event-related potential data from a guess the number brain-computer interface experiment on school children. Scientific Data, 4.