Abstract:
Electroencephalogram (EEG) signals offer high temporal resolution and capture asymmetric spatial activations, making them a reliable modality for emotion recognition, unlike voice signals or facial expressions, which are easily imitated. However, emotional responses to the same stimuli vary substantially across individuals, so EEG patterns are far from universal; this variability has motivated subject-dependent emotion recognition, which has shown promising results. Existing methods often struggle to capture the complex patterns in EEG data and to generalize across individuals and recording conditions. This research proposes a heterogeneous ensemble learning approach with a dedicated voting mechanism that exploits spatial asymmetry and temporal dynamics in EEG for more accurate and generalizable emotion recognition. Pre-processed EEG data are decomposed with Variational Mode Decomposition (VMD) and Empirical Mode Decomposition (EMD) for feature extraction, and feature selection is optimized using the Garra Rufa Fish Optimization Algorithm (GRFOA). The ensemble integrates a Temporal Convolutional Network (TCN), an Extreme Learning Machine (ELM), and a Multi-Layer Perceptron (MLP), and the final emotion label is obtained from a heterogeneous voting classifier. The approach is validated on two publicly available datasets, DEAP and MAHNOB-HCI, under extensive cross-validation settings, demonstrating its effectiveness and generalizability for EEG-based emotion recognition.
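The voting step of the ensemble can be illustrated with a minimal sketch. This is an assumption-laden example, not the paper's implementation: it assumes hard (label-level) voting with optional per-model weights, and the three prediction rows standing in for the TCN, ELM, and MLP outputs are purely hypothetical.

```python
import numpy as np

def heterogeneous_vote(predictions, weights=None):
    """Weighted majority vote over label predictions from heterogeneous models.

    predictions: (n_models, n_samples) array of integer class labels
    weights: optional per-model weight vector; defaults to equal weights
    """
    predictions = np.asarray(predictions)
    n_models, n_samples = predictions.shape
    if weights is None:
        weights = np.ones(n_models)
    n_classes = predictions.max() + 1
    # Accumulate each model's (weighted) vote for its predicted class
    scores = np.zeros((n_samples, n_classes))
    for m in range(n_models):
        scores[np.arange(n_samples), predictions[m]] += weights[m]
    # Return the class with the highest total vote per sample
    return scores.argmax(axis=1)

# Hypothetical label outputs of the three base models on four EEG segments
preds = [
    [0, 1, 1, 2],  # TCN (placeholder output)
    [0, 1, 2, 2],  # ELM (placeholder output)
    [1, 1, 2, 0],  # MLP (placeholder output)
]
print(heterogeneous_vote(preds))  # -> [0 1 2 2]
```

With unequal weights, the same function implements a weighted voting classifier, letting a stronger base model break ties in its favor.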