5 CONCLUSIONS
A real-time sonification system was used to design flexible, multi-parametric sonifications of EEG data that end-users could then adjust in real time to personalize the mappings. Four settings varying in feedback richness and personalization were evaluated in a between-subjects design using 15 minutes of alpha/theta (a/t) neurofeedback training. Comparison of subjective and physiological data from before and after training showed significant relaxation in the groups receiving personalized feedback, but not in the groups training with fixed sonification mappings. In addition, a larger number of sonified EEG features resulted in a greater increase in the theta/alpha (t/a) ratio. These results demonstrate that both sonification richness (the number of EEG features displayed) and end-user personalization play an important role in the effectiveness of real-time EEG sonification for neurofeedback.
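To make the t/a ratio concrete, the following is a minimal sketch of how a theta/alpha band-power ratio can be computed for a single EEG channel. The band limits, sampling rate, and use of Welch's method are illustrative assumptions for this sketch, not the exact pipeline used in the study reported here.

```python
# Illustrative sketch: theta/alpha (t/a) band-power ratio for one EEG channel.
# Band limits, sampling rate, and Welch's method are assumptions, not the
# authors' exact processing chain.
import numpy as np
from scipy.signal import welch

FS = 128.0                   # assumed sampling rate (Hz)
THETA_BAND = (4.0, 8.0)      # assumed theta limits (Hz)
ALPHA_BAND = (8.0, 12.0)     # assumed alpha limits (Hz)

def band_power(freqs, psd, band):
    """Integrate the power spectral density over a frequency band."""
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def theta_alpha_ratio(eeg, fs=FS):
    """Return the theta/alpha power ratio of a 1-D EEG signal."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    return band_power(freqs, psd, THETA_BAND) / band_power(freqs, psd, ALPHA_BAND)

if __name__ == "__main__":
    # Synthetic alpha-dominant signal: the ratio should come out below 1.
    t = np.arange(0, 10, 1 / FS)
    x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
    print(f"t/a ratio: {theta_alpha_ratio(x):.2f}")
```

An increase in this ratio over a training session is the kind of change referred to above as a t/a ratio increase.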
ACKNOWLEDGEMENTS
The last author of this paper received funding from the Marie Curie Actions of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA GA-303172.