If emotions can be estimated from brain waves,
this could lead to applications in neuromarketing,
such as product development based on consumers'
unconscious reactions, as well as in safe driving
assistance systems that monitor drivers' emotional
states to help prevent dangerous driving.
Although several studies have reported emotion
classification using EEG, many of them focused on
only two or three specific emotions (Alarcão &
Fonseca, 2019). However, Plutchik reported that
human emotions can be expressed as combinations
of eight basic emotions (Plutchik, 1980). Existing
brainwave-based emotion classification research has
therefore not fully covered this diverse range of
emotions. This study aims to classify
four types of emotional states based on Plutchik's
eight basic emotions.
2 RELATED WORK
Using EEG for emotion estimation poses several
challenges: EEG is a high-dimensional time-series
signal, it exhibits significant inter-individual
variability, and it lacks clear patterns associated
with specific emotional states. Previous studies have attempted to
capture and classify emotional characteristics in EEG
through signal processing techniques. Common
approaches have relied on power spectral analysis and
electrode correlation information. More recently,
machine learning-based methods have been explored
to automatically extract previously unknown
emotion-related features from EEG signals.
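As a rough illustration of the band-power features that such classical approaches rely on, the following sketch extracts theta/alpha/beta power from a synthetic single-channel signal via a plain FFT periodogram. The sampling rate, band edges, and the use of white noise as stand-in EEG are assumptions for illustration, not details taken from the cited studies.

```python
# Illustrative band-power feature extraction from one EEG channel.
# Sampling rate and band edges follow common conventions (assumed).
import numpy as np

fs = 256                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * fs)         # 10 s of synthetic "EEG"

# Periodogram: squared magnitude of the real FFT, with matching frequencies.
freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
print(sorted(features))  # ['alpha', 'beta', 'theta']
```

In practice these per-band powers (per electrode) form the feature vector handed to a classifier.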
Zheng proposed a method to estimate three
emotional states (positive, negative, and neutral)
using group sparse canonical correlation analysis
(GSCCA), which identifies correlations between
electrodes and EEG frequencies (Zheng, 2017).
Li and Lu proposed a method for classifying
happiness and sadness using CSP and a linear
SVM (Li & Lu, 2009). In their study, participants were shown facial images
study, participants were shown facial images
representing specific emotions, and their EEG was
recorded. Emotion estimation with this classifier
achieved an average test accuracy of 93.5%. Saha
reports a method using CNN for emotion estimation
(Saha et al., 2022). However, because DNN-based
emotion estimation algorithms function as black
boxes, it is difficult to verify whether their estimates
rely on the brain activity features identified in prior
EEG and fMRI studies. This limitation reduces the
reliability of EEG-based emotion estimation.
This study aims to analyze the relationship
between emotion and brain activity using EEG by
combining a signal processing method that visualizes
the correlation and direction of each frequency
between electrodes with a neural network (NN). In
previous studies, we proposed two methods using
multidimensional directed coherence analysis to
visualize brain activity from EEG signals (Torii et al.,
2023; Torii et al., 2024). Multidimensional directed
coherence analysis tracks brain activity more
effectively than one-dimensional coherence and was
used to estimate joy, sadness, anger, and surprise.
With the exception of joy, multidimensional
coherence analysis achieved significantly higher
accuracy than one-dimensional coherence analysis,
which does not capture the multidimensional flow of
brain activity.
We extracted combinations of frequencies and
electrodes that showed statistically significant
differences in the relative magnitudes of
multidimensional directed coherence values across
emotions. These differences were used as rules for
emotion estimation, termed 'relative emotion rules,' as
detailed in Section 3.2. In the first method, each
extracted relative emotion rule was assigned equal
weight for emotion estimation. However, there are
varying levels of importance among these rules in
accurately estimating emotion.
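The equal-weight use of relative emotion rules in the first method can be sketched as simple voting: every satisfied rule casts one vote for its emotion. The rule format, electrode names, and thresholds below are hypothetical placeholders, not the study's actual rules.

```python
# Hypothetical sketch of equal-weight voting over "relative emotion rules".
# Each rule: (feature_key, threshold, emotion), meaning coherence above the
# threshold at that (source, target, band) feature suggests that emotion.
from collections import Counter

def estimate_emotion(coherence, rules):
    """coherence: dict mapping feature_key -> directed-coherence value.
    Each satisfied rule casts one equal-weight vote; the top emotion wins."""
    votes = Counter()
    for key, threshold, emotion in rules:
        if coherence.get(key, 0.0) > threshold:
            votes[emotion] += 1
    # Ties are broken arbitrarily; no satisfied rule means no estimate.
    return votes.most_common(1)[0][0] if votes else None

rules = [
    (("F3", "F4", "alpha"), 0.6, "joy"),
    (("O1", "P3", "beta"), 0.5, "sadness"),
    (("F3", "F4", "alpha"), 0.4, "joy"),
]
sample = {("F3", "F4", "alpha"): 0.7, ("O1", "P3", "beta"): 0.3}
print(estimate_emotion(sample, rules))  # prints "joy"
```

The limitation noted above is visible here: every rule contributes exactly one vote, regardless of how discriminative it actually is.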
In the second method, we therefore explored
enhancing accuracy by focusing on the relative
emotion rules that are most effective for emotion
estimation. NNs are widely used in various fields to
provide optimal solutions by weighting data
appropriately. Previous research describes a method
for classifying the importance of relative emotion
rules across four emotions—joy, sadness, anger, and
surprise—using NNs.
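One minimal way to realize such weighting, sketched below on entirely synthetic data, is a single-layer softmax network over binary rule-satisfaction features: after training, the weight magnitudes can be read as per-rule importance. This is an illustration of the weighting idea only, not the architecture or data of the cited work.

```python
# Sketch: learn per-rule importance with a one-layer softmax classifier.
# Data are synthetic: rules 0-3 each indicate one emotion; rules 4-5 are noise.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_emotions = 6, 4

X = rng.integers(0, 2, size=(200, n_rules)).astype(float)
# Label = whichever informative rule fires (small noise breaks ties).
y = np.argmax(X[:, :n_emotions] + 0.1 * rng.random((200, n_emotions)), axis=1)

W = np.zeros((n_rules, n_emotions))
for _ in range(500):  # plain gradient descent on the cross-entropy loss
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - np.eye(n_emotions)[y]) / len(X)

importance = np.abs(W).sum(axis=1)  # aggregate importance of each rule
print(importance.round(2))
```

On this toy data the informative rules (0-3) end up with larger weight magnitudes than the noise rules (4-5), which is the property the second method exploits.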
This study also explored methods to classify four
emotional states—joy, sadness, anger, and surprise—
by combining multidimensional directed coherence
analysis, noise, and NNs. By incorporating noise, we
aimed to represent individual variability in EEG
signals, and by utilizing all values obtained from the
analysis—not just those used in the relative emotion
rule—we sought to extract features that, while not
statistically different, play a crucial role in accurate
emotion estimation.
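The noise-injection idea above can be sketched as a simple augmentation step: Gaussian perturbations of the coherence feature vectors stand in for inter-individual variability and multiply the training set. The noise scale and copy count below are illustrative choices, not values from the study.

```python
# Sketch of noise-based augmentation of directed-coherence feature vectors.
import numpy as np

def augment(features, n_copies=10, sigma=0.05, seed=0):
    """features: (n_samples, n_features) array of coherence values.
    Returns the originals plus n_copies Gaussian-perturbed variants of each."""
    rng = np.random.default_rng(seed)
    noisy = [features + rng.normal(0.0, sigma, features.shape)
             for _ in range(n_copies)]
    return np.concatenate([features] + noisy, axis=0)

X = np.random.default_rng(1).random((20, 8))  # toy coherence features
X_aug = augment(X)
print(X_aug.shape)  # (220, 8)
```

Because the NN then sees all coherence values, not only those covered by the relative emotion rules, it can pick up features that are useful for classification even without reaching statistical significance.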
The contributions of this study are as follows.
First, by combining multidimensional directed
coherence analysis with NNs, we achieved a higher
accuracy rate than previous methods in the test data.
Second, by analyzing the NN weights, we
demonstrated the potential to identify not only areas
with significant differences between emotions but
also subtle features that are important for emotion
classification. This approach, which leverages cost-