Turkish Journal of Electrical Engineering and Computer Sciences
Affective states classification performance of audio-visual stimuli from EEG signals with multiple-instance learning
Emotion recognition continues to be an essential subject of study across various disciplines. With the advancement of machine learning methods, accurate emotion recognition from different data modalities (e.g. facial images and brain EEG signals) has become possible. The success of EEG-based emotion recognition systems depends on efficient feature extraction and on pre- and postprocessing of the signals. The main objective of this study is to analyze the efficacy of multiple-instance learning (MIL) for postprocessing EEG signal features from three different domains (time, frequency, and time-frequency) for human emotion classification. Methods and results are presented for single-trial classification of valence (V), arousal (A), and dominance (D) ratings from EEG signals obtained under audio (A), video (V), and audio-video (AV) stimuli, using the alpha, beta, and gamma bands. High accuracy was observed for both binary and multiclass classification under the AV stimulus. The findings of this study suggest that MIL applied to frequency-domain features yields efficient results for EEG emotion recognition.
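To make the MIL setup concrete, the following is a minimal sketch of how a single EEG trial can be treated as a "bag" of short windows (instances) described by alpha/beta/gamma band-power features, with a bag-level emotion label. The data here is entirely synthetic, and the classifier is a common single-instance-learning MIL baseline (propagate the bag label to every instance, then average instance scores per bag) written in plain NumPy; it is not the authors' actual method, and all names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
FS, N_CH, WIN = 128, 4, 128          # assumed sampling rate, channels, window length
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}  # Hz

def band_powers(win):
    """Mean FFT power in the alpha/beta/gamma bands for one window (N_CH, WIN)."""
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(win, axis=-1)) ** 2
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                           for lo, hi in BANDS.values()])  # shape (N_CH * 3,)

def make_trial(label, n_win=6):
    """Synthetic trial: 'positive' trials get a 10 Hz (alpha) burst in half the windows."""
    t = np.arange(WIN) / FS
    wins = rng.normal(size=(n_win, N_CH, WIN))
    if label == 1:
        wins[: n_win // 2] += 2.0 * np.sin(2 * np.pi * 10 * t)
    return np.stack([band_powers(w) for w in wins])  # one bag: (n_win, n_features)

# Build bags (trials) with bag-level labels (e.g. low vs. high valence).
labels = np.array([0] * 10 + [1] * 10)
bags = [make_trial(y) for y in labels]

# SIL baseline: give every instance its bag's label and fit a tiny
# logistic regression by batch gradient descent (plain NumPy).
X = np.vstack(bags)
y = np.repeat(labels, [len(bag) for bag in bags])
X = (X - X.mean(0)) / X.std(0)

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(800):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# Bag prediction = mean instance probability over the trial's windows.
split_idx = np.cumsum([len(bag) for bag in bags])[:-1]
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
bag_pred = np.array([g.mean() > 0.5 for g in np.split(probs, split_idx)])
print("bag-level training accuracy:", (bag_pred == labels).mean())
```

Mean aggregation of instance scores is only one choice; max aggregation (a bag is positive if any window is) is another standard MIL assumption and may fit affective EEG differently.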
Keywords: Emotion recognition, EEG, multiple-instance learning, time domain, frequency domain, time-frequency domain
Daşdemir, Yaşar and Özakar, Rüstem, "Affective states classification performance of audio-visual stimuli from EEG signals with multiple-instance learning," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 30: No. 7, Article 15.
Available at: https://journals.tubitak.gov.tr/elektrik/vol30/iss7/15
Subject areas: Computer Engineering Commons, Computer Sciences Commons, Electrical and Computer Engineering Commons