Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition

Authors

  • Nattapong Thammasan, Osaka University
  • Ken-ichi Fukui, Osaka University
  • Masayuki Numao, Osaka University

DOI:

https://doi.org/10.1609/aaai.v31i1.11112

Keywords:

emotion recognition, affective computing, brain-computer interface

Abstract

Multimodality has recently been exploited to overcome the challenges of emotion recognition. In this paper, we present a study of decision-level fusion of electroencephalogram (EEG) features and musical features extracted from musical stimuli for recognizing time-varying binary classes of arousal and valence. Our empirical results demonstrate that the EEG modality suffered from the instability of EEG signals, yet fusing it with the music modality alleviated this issue and enhanced the performance of emotion recognition.
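The abstract names decision-level fusion: each modality is scored by its own classifier, and the per-class outputs are combined afterward. Below is a minimal Python sketch of that idea, assuming one probabilistic classifier per modality whose class probabilities are averaged and thresholded; the synthetic features, the SVM classifiers, and the equal fusion weights are illustrative assumptions, not the authors' exact configuration.

    # Decision-level fusion sketch: independent classifiers per modality,
    # averaged class probabilities. All data and settings are hypothetical.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical per-segment feature matrices: EEG features (e.g., band
    # powers) and musical features (e.g., tempo, mode), with binary
    # valence labels for each time segment.
    X_eeg = rng.normal(size=(200, 32))
    X_music = rng.normal(size=(200, 10))
    y = rng.integers(0, 2, size=200)

    idx_train, idx_test = train_test_split(
        np.arange(200), test_size=0.25, random_state=0)

    # One probabilistic classifier per modality.
    clf_eeg = SVC(probability=True).fit(X_eeg[idx_train], y[idx_train])
    clf_music = SVC(probability=True).fit(X_music[idx_train], y[idx_train])

    # Fuse at decision level: average the class-1 probabilities from the
    # two modalities and threshold at 0.5 for the binary label.
    p_eeg = clf_eeg.predict_proba(X_eeg[idx_test])[:, 1]
    p_music = clf_music.predict_proba(X_music[idx_test])[:, 1]
    y_pred = ((p_eeg + p_music) / 2 >= 0.5).astype(int)

    print(f"fused accuracy: {(y_pred == y[idx_test]).mean():.2f}")

Because fusion happens after classification, an unreliable modality (here, unstable EEG) can be compensated by the other modality's scores, which is the effect the abstract reports.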

Published

2017-02-12

How to Cite

Thammasan, N., Fukui, K.-ichi, & Numao, M. (2017). Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11112