Turkish Journal of Electrical Engineering and Computer Sciences
DOI
10.55730/1300-0632.3964
Abstract
Emotion recognition continues to be an essential subject of study across various disciplines. With the advancement of machine learning methods, accurate emotion recognition from different data modalities (facial images, brain EEG signals) has become possible. The success of EEG-based emotion recognition systems depends on efficient feature extraction and on the pre- and postprocessing of the signals. The main objective of this study is to analyze the efficacy of multiple-instance learning (MIL) applied to EEG features extracted in three different domains (time, frequency, time-frequency) for human emotion classification. Methods and results are presented for single-trial classification of valence (V), arousal (A), and dominance (D) ratings from EEG signals obtained with audio (A), video (V), and audio-video (AV) stimuli, using the alpha, beta, and gamma bands. High accuracy was observed for both binary and multiclass classification with the AV stimulus. The findings of this study suggest that MIL applied to frequency-domain features yields efficient results for EEG emotion recognition.
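As an illustration of the kind of pipeline the abstract describes, the sketch below shows a minimal MIL-style baseline on frequency-domain EEG features: each trial is treated as a bag of short segments (instances), alpha/beta/gamma band powers serve as instance features, and bag-level labels are obtained by mean-pooling instance probabilities. The sampling rate, window length, aggregation scheme, and all names are illustrative assumptions, not the authors' exact method.

# Minimal sketch of a MIL-style pipeline for EEG emotion classification.
# Illustrative assumptions only: 2 s segments as instances, alpha/beta/gamma
# band power as frequency-domain features, mean-pooled instance probabilities
# as the bag (trial) prediction. Not the authors' exact method.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 128                      # sampling rate in Hz (assumed)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(segment):
    # Average band power over channels for one segment (channels x samples).
    freqs, psd = welch(segment, fs=FS, nperseg=min(256, segment.shape[-1]))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean())      # mean over channels and band bins
    return np.array(feats)

def trial_to_bag(trial, win=FS * 2):
    # Split one trial (channels x samples) into non-overlapping 2 s instances.
    n = trial.shape[1] // win
    return np.stack([band_powers(trial[:, i * win:(i + 1) * win]) for i in range(n)])

# Toy data: 20 trials, 32 channels, 60 s each, with binary valence labels.
rng = np.random.default_rng(0)
trials = [rng.standard_normal((32, 60 * FS)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
bags = [trial_to_bag(t) for t in trials]

# Instance-level training: every instance inherits its bag (trial) label.
X = np.vstack(bags)
y = np.concatenate([[lab] * len(bag) for bag, lab in zip(bags, labels)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Bag-level prediction: mean-pool instance probabilities per trial.
bag_scores = [clf.predict_proba(bag)[:, 1].mean() for bag in bags]
bag_preds = (np.array(bag_scores) > 0.5).astype(int)
print("Bag-level training accuracy:", (bag_preds == labels).mean())

With real data, the toy trials would be replaced by preprocessed EEG recordings and the simple mean-pooling aggregator could be swapped for a dedicated MIL classifier.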
Keywords
Emotion recognition, EEG, multiple-instance learning, time domain, frequency domain, time-frequency domain
First Page
2707
Last Page
2724
Recommended Citation
DAŞDEMİR, YAŞAR and ÖZAKAR, RÜSTEM (2022) "Affective states classification performance of audio-visual stimuli from EEG signals with multiple-instance learning," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 30: No. 7, Article 15. https://doi.org/10.55730/1300-0632.3964
Available at: https://journals.tubitak.gov.tr/elektrik/vol30/iss7/15