AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head

Authors

  • Rongjie Huang Zhejiang University
  • Mingze Li Zhejiang University
  • Dongchao Yang Peking University
  • Jiatong Shi Carnegie Mellon University
  • Xuankai Chang Carnegie Mellon University
  • Zhenhui Ye Zhejiang University
  • Yuning Wu Renmin University of China
  • Zhiqing Hong Zhejiang University
  • Jiawei Huang Zhejiang University
  • Jinglin Liu Zhejiang University
  • Yi Ren Zhejiang University
  • Yuexian Zou Peking University
  • Zhou Zhao Zhejiang University
  • Shinji Watanabe Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v38i21.30570

Keywords:

Artificial Intelligence, Natural language processing and speech recognition, Human-AI interaction (including Human-robot interaction)

Abstract

Large language models (LLMs) have exhibited remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. Despite this recent success, current LLMs are not capable of processing complex audio information or conducting spoken conversations (like Siri or Alexa). In this work, we propose a multi-modal AI system named AudioGPT, which complements LLMs (i.e., ChatGPT) with 1) foundation models to process complex audio information and solve numerous understanding and generation tasks; and 2) an input/output interface (ASR, TTS) to support spoken dialogue. With the increasing demand to evaluate multi-modal LLMs on human intention understanding and cooperation with foundation models, we outline the principles and processes and test AudioGPT in terms of consistency, capability, and robustness. Experimental results demonstrate the capabilities of AudioGPT in solving 16 AI tasks involving speech, music, sound, and talking head understanding and generation in multi-round dialogues, empowering humans to create rich and diverse audio content with unprecedented ease. Code can be found at https://github.com/AIGC-Audio/AudioGPT
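The pipeline described in the abstract (an LLM acting as a controller that takes spoken input through ASR, dispatches the request to an audio foundation model, and returns spoken output through TTS) can be sketched as follows. This is a minimal illustration only: the `asr`/`tts` stubs, the keyword-based routing rule, and the `TASK_MODELS` registry are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of an AudioGPT-style controller loop, under the assumption
# that routing can be illustrated with simple keyword matching. Real systems
# would use the LLM itself to select the task and parse arguments.

def asr(audio: bytes) -> str:
    """Placeholder ASR: pretend the audio bytes decode directly to text."""
    return audio.decode("utf-8")

def tts(text: str) -> bytes:
    """Placeholder TTS: pretend we synthesize speech by encoding the text."""
    return text.encode("utf-8")

# Hypothetical registry mapping task keywords to audio foundation models.
TASK_MODELS = {
    "transcribe": lambda req: f"[speech model] {req}",
    "compose":    lambda req: f"[music model] {req}",
    "sound":      lambda req: f"[sound model] {req}",
}

def controller(audio_in: bytes) -> bytes:
    """LLM-as-controller: parse the spoken request, dispatch, speak back."""
    request = asr(audio_in)
    for keyword, model in TASK_MODELS.items():
        if keyword in request:
            return tts(model(request))
    return tts(f"[chat] {request}")  # fall back to plain spoken dialogue
```

For example, `controller(b"compose a short melody")` would route through the music placeholder and return synthesized speech for its response, while an unmatched request falls back to ordinary chat.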

Published

2024-03-24

How to Cite

Huang, R., Li, M., Yang, D., Shi, J., Chang, X., Ye, Z., Wu, Y., Hong, Z., Huang, J., Liu, J., Ren, Y., Zou, Y., Zhao, Z., & Watanabe, S. (2024). AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23802-23804. https://doi.org/10.1609/aaai.v38i21.30570