Inferring Emotion from Large-scale Internet Voice Data: A Semi-supervised Curriculum Augmentation based Deep Learning Approach

Authors

  • Suping Zhou Tsinghua University
  • Jia Jia Tsinghua University
  • Zhiyong Wu Tsinghua University
  • Zhihan Yang Tsinghua University
  • Yanfeng Wang Sogou Corporation, Beijing, China
  • Wei Chen Sogou Corporation, Beijing, China
  • Fanbo Meng Sogou Corporation, Beijing, China
  • Shuo Huang Tsinghua University
  • Jialie Shen Queen's University Belfast, U.K.
  • Xiaochuan Wang Sogou Corporation, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v35i7.16753

Keywords:

Emotional Intelligence, Speech & Signal Processing, Applications

Abstract

Effective emotion inference from user queries helps Voice Dialogue Applications (VDAs) give more personified responses. The tremendous number of VDA users brings in diverse emotion expressions. How can we achieve high emotion-inference performance on the large-scale internet voice data found in VDAs? Traditionally, research on speech emotion recognition has been based on acted voice datasets, which have few speakers but strong, clear emotion expressions. Inspired by this, in this paper we propose a novel approach that leverages acted voice data with strong emotion expressions to enhance large-scale unlabeled internet voice data with diverse emotion expressions for emotion inference. Specifically, we propose a novel semi-supervised multi-modal curriculum augmentation deep learning framework. First, to learn more general emotion cues, we adopt a curriculum learning based epoch-wise training strategy, which first trains our model under the guidance of strong and balanced emotion samples from acted voice data and subsequently leverages weak and unbalanced emotion samples from internet voice data. Second, to exploit more diverse emotion expressions, we design a Multi-path Mix-match Multimodal Deep Neural Network (MMMD), which effectively learns feature representations for multiple modalities and trains on labeled and unlabeled data with hybrid semi-supervised methods for superior generalization and robustness. Experiments on an internet voice dataset with 500,000 utterances show that our method outperforms several alternative baselines (+10.09% in terms of F1), while an acted corpus with only 2,397 utterances contributes 4.35%. To further compare our method with state-of-the-art techniques on traditional acted voice datasets, we also conduct experiments on the public IEMOCAP dataset. The results reveal the effectiveness of the proposed approach.
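The abstract's two key ingredients, an epoch-wise curriculum from acted to internet data and MixMatch-style semi-supervised training, can be illustrated roughly as follows. This is a minimal PyTorch sketch under stated assumptions, not the authors' actual MMMD architecture or training recipe: the toy network, the hyperparameters (warm-up epochs, sharpening temperature T, mixup alpha, loss weight lambda_u), and the single-pass label guessing are all simplifications introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionNet(nn.Module):
    """Toy stand-in for the paper's multi-path multimodal network (MMMD)."""
    def __init__(self, feat_dim=64, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x):
        return self.net(x)

def sharpen(p, T=0.5):
    """MixMatch-style temperature sharpening of guessed label distributions."""
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Mix two batches; lam >= 0.5 keeps each sample close to its source."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def semi_supervised_loss(model, x_l, y_l, x_u, n_classes=4, lambda_u=1.0):
    """One MixMatch-flavoured step: guess labels, sharpen, mix, combine losses."""
    with torch.no_grad():
        q = sharpen(torch.softmax(model(x_u), dim=1))  # guessed soft labels
    y_soft = F.one_hot(y_l, n_classes).float()
    all_x = torch.cat([x_l, x_u])
    all_y = torch.cat([y_soft, q])
    idx = torch.randperm(all_x.size(0))
    mx, my = mixup(all_x, all_y, all_x[idx], all_y[idx])
    logits = model(mx)
    n_l = x_l.size(0)
    # Supervised term: soft cross-entropy on the (mixed) labeled part.
    loss_x = -(my[:n_l] * F.log_softmax(logits[:n_l], dim=1)).sum(1).mean()
    # Unsupervised term: consistency (L2) on the (mixed) unlabeled part.
    loss_u = F.mse_loss(torch.softmax(logits[n_l:], dim=1), my[n_l:])
    return loss_x + lambda_u * loss_u

def train_curriculum(model, acted_loader, internet_loader,
                     warm_epochs=5, total_epochs=20):
    """Epoch-wise curriculum: acted corpus first, then add internet voice data."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(total_epochs):
        if epoch < warm_epochs:
            # Stage 1: strong, balanced emotion samples (acted voice data).
            for x, y in acted_loader:
                loss = F.cross_entropy(model(x), y)
                opt.zero_grad(); loss.backward(); opt.step()
        else:
            # Stage 2: weak, unbalanced internet voice enters through the
            # semi-supervised objective; no labels are needed for x_u.
            for (x_l, y_l), x_u in zip(acted_loader, internet_loader):
                loss = semi_supervised_loss(model, x_l, y_l, x_u)
                opt.zero_grad(); loss.backward(); opt.step()
```

A real pipeline would replace EmotionNet with parallel feature-extraction paths per modality and the paper's specific augmentation schedule; the sketch only mirrors the two-stage, acted-then-internet training flow described in the abstract.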

Published

2021-05-18

How to Cite

Zhou, S., Jia, J., Wu, Z., Yang, Z., Wang, Y., Chen, W., Meng, F., Huang, S., Shen, J., & Wang, X. (2021). Inferring Emotion from Large-scale Internet Voice Data: A Semi-supervised Curriculum Augmentation based Deep Learning Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6039-6047. https://doi.org/10.1609/aaai.v35i7.16753

Issue

Vol. 35 No. 7 (2021)

Section

AAAI Technical Track on Humans and AI