Synch-Graph: Multisensory Emotion Recognition Through Neural Synchrony via Graph Convolutional Networks

Authors

  • Esma Mansouri-Benssassi, University of St Andrews
  • Juan Ye, University of St Andrews

DOI:

https://doi.org/10.1609/aaai.v34i02.5491

Abstract

Human emotions are essentially multisensory: emotional states are conveyed through multiple modalities such as facial expression, body language, and verbal and non-verbal signals. Multimodal, or multisensory, learning is therefore crucial for recognising emotions and interpreting social signals. Existing multisensory emotion recognition approaches focus on extracting features from each modality separately, ignoring the constant interaction and co-learning between modalities. In this paper, we present Synch-Graph, a novel bio-inspired approach based on neural synchrony in audio-visual multisensory integration in the brain. We model multisensory interaction using spiking neural networks (SNN) and explore the use of Graph Convolutional Networks (GCN) to represent and learn neural synchrony patterns. We hypothesise that modelling interactions between modalities will improve the accuracy of emotion recognition. We have evaluated Synch-Graph on two state-of-the-art datasets, achieving overall accuracies of 98.3% and 96.82%, which are significantly higher than those of existing techniques.
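The abstract only outlines the approach, so the sketch below is a rough illustration of the core idea: encoding pairwise neural synchrony as a weighted graph and applying one GCN propagation step over it. This is a minimal NumPy sketch; the synchrony matrix, node features, and layer sizes are hypothetical values chosen for illustration and do not reproduce the authors' Synch-Graph implementation.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt   # symmetric normalisation
    return np.maximum(A_norm @ X @ W, 0.0)     # ReLU activation

# Hypothetical example: 6 spiking neurons (3 audio, 3 visual).
# Edge weights stand in for a pairwise synchrony measure between spike
# trains (e.g. a correlation-like score in [0, 1]); values are made up.
synchrony = np.array([
    [0.0, 0.8, 0.1, 0.6, 0.2, 0.0],
    [0.8, 0.0, 0.3, 0.5, 0.1, 0.0],
    [0.1, 0.3, 0.0, 0.0, 0.7, 0.4],
    [0.6, 0.5, 0.0, 0.0, 0.2, 0.1],
    [0.2, 0.1, 0.7, 0.2, 0.0, 0.9],
    [0.0, 0.0, 0.4, 0.1, 0.9, 0.0],
])

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))  # per-neuron features (e.g. firing statistics)
weights = rng.normal(size=(4, 2))   # learnable layer weights

hidden = gcn_layer(synchrony, features, weights)
print(hidden.shape)  # (6, 2) node embeddings, which could be pooled for classification
```

In this reading, synchrony between audio- and visual-driven neurons determines the graph structure, so message passing in the GCN directly reflects cross-modal interaction rather than per-modality features alone.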

Published

2020-04-03

How to Cite

Mansouri-Benssassi, E., & Ye, J. (2020). Synch-Graph: Multisensory Emotion Recognition Through Neural Synchrony via Graph Convolutional Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 1351-1358. https://doi.org/10.1609/aaai.v34i02.5491

Section

AAAI Technical Track: Cognitive Systems