Authors: S. Mealla¹; A. Oliveira¹; X. Marimon²; T. Steffert³; S. Jordà¹ and A. Väljamäe⁴
Affiliations: ¹Universitat Pompeu Fabra, Spain; ²Universitat Politecnica de Catalunya, Spain; ³The Open University, United Kingdom; ⁴St. Petersburg State University and Linköping University, Russian Federation
Keyword(s): Sonification, EEG, Alpha/Theta Neurofeedback, Physiological Computing, Pure Data, Sound, Real Time.
Related Ontology Subjects/Areas/Topics: Affective Computing; Applications; Biofeedback Technologies; Biosignal Acquisition, Analysis and Processing; Human-Computer Interaction; Interactive Physiological Systems; Methodologies and Methods; Pattern Recognition; Physiological Computing Systems; Physiology-Driven Computer Interaction; Software Engineering
Abstract:
The field of physiology-based interaction and monitoring is developing at a fast pace. Emerging applications such as fatigue monitoring often use sound to convey the complex dynamics of biological signals and to provide an alternative, non-visual information channel. However, most physiology-to-sound mappings in such auditory displays do not allow customization by end users. We designed a new sonification system for extracting, processing, and displaying electroencephalography (EEG) data with different sonification strategies. The system was validated with four user groups performing alpha/theta (a/t) neurofeedback training for relaxation; the groups varied in feedback personalization (personalized/fixed) and in the number of sonified EEG features (single/multiple). The groups with personalized feedback performed significantly better in their training than the fixed-mapping groups, as shown by both subjective ratings and physiological indices. Additionally, sonifying a higher number of EEG features resulted in deeper relaxation than training with single-feature feedback. Our results demonstrate the importance of adapting and personalizing EEG sonification for particular applications, in our case a/t neurofeedback. Our experimental approach shows how user performance can be used to validate different sonification strategies.