SECodec: Structural Entropy-based Compressive Speech Representation Codec for Speech Language Models

Authors

  • Linqin Wang Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Yaping Liu Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Zhengtao Yu Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Shengxiang Gao Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Cunli Mao Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Yuxin Huang Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Wenjun Wang Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
  • Ling Dong Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China Yunnan Key Laboratory of Artificial Intelligence, Kunming, China

DOI:

https://doi.org/10.1609/aaai.v39i24.34726

Abstract

With the rapid advancement of large language models (LLMs), discrete speech representations have become crucial for integrating speech into LLMs. Existing methods for discretizing speech representations rely on a predefined codebook size and Euclidean distance-based quantization. However, 1) the codebook size is a critical parameter that affects both codec performance and the training efficiency of downstream tasks, and 2) Euclidean distance-based quantization can cause audio distortion when the codebook size is kept within a reasonable range. In the field of information compression, structural information and entropy guidance are crucial, yet previous methods have largely overlooked them. To address these issues from an information-theoretic perspective, we present SECodec, a novel speech representation codec based on structural entropy (SE) for building speech language models. Specifically, we first model speech as a graph, clustering the speech feature nodes within the graph and extracting the corresponding codebook by hierarchically and disentangledly minimizing 2D SE. Then, to mitigate audio distortion, we propose a new quantization method that still adheres to the 2D SE minimization principle, adaptively selecting for each incoming speech node the most suitable token corresponding to its cluster. Furthermore, we develop a Structural Entropy-based Speech Language Model (SESLM) that leverages SECodec. Experimental results demonstrate that SECodec performs comparably to EnCodec in speech reconstruction, and that SESLM surpasses VALL-E in zero-shot text-to-speech tasks.
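The abstract's core quantity is the two-dimensional structural entropy of a graph under a node partition, which the codec minimizes both to extract the codebook and to assign tokens. As a rough illustration of what is being minimized, the sketch below computes 2D SE for an unweighted, undirected graph following the standard formulation (Li & Pan's definition); the function name, the edge-list representation, and the example graph are all illustrative assumptions, not the paper's actual implementation, which operates on speech feature graphs.

```python
import math

def structural_entropy_2d(edges, partition):
    """Two-dimensional structural entropy of an undirected graph
    under a node partition (standard Li & Pan formulation).

    edges:     list of (u, v) undirected, unweighted edges
    partition: dict mapping node -> community label
    """
    # Node degrees and total volume (= 2 * number of edges).
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    vol = sum(deg.values())

    # Per-community volume and cut size (edges leaving the community).
    vol_c, cut_c = {}, {}
    for n, d in deg.items():
        vol_c[partition[n]] = vol_c.get(partition[n], 0) + d
    for u, v in edges:
        if partition[u] != partition[v]:
            cut_c[partition[u]] = cut_c.get(partition[u], 0) + 1
            cut_c[partition[v]] = cut_c.get(partition[v], 0) + 1

    h = 0.0
    # Intra-community term: uncertainty of a node within its community.
    for n, d in deg.items():
        h -= (d / vol) * math.log2(d / vol_c[partition[n]])
    # Inter-community term: weighted by each community's cut size.
    for c, vc in vol_c.items():
        h -= (cut_c.get(c, 0) / vol) * math.log2(vc / vol)
    return h
```

On a toy graph of two triangles joined by a single edge, the natural two-community partition yields a lower 2D SE than the trivial one-community partition, which is the sense in which SE minimization recovers cluster structure (and, in SECodec, codebook entries).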

Published

2025-04-11

How to Cite

Wang, L., Liu, Y., Yu, Z., Gao, S., Mao, C., Huang, Y., … Dong, L. (2025). SECodec: Structural Entropy-based Compressive Speech Representation Codec for Speech Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25380–25388. https://doi.org/10.1609/aaai.v39i24.34726

Section

AAAI Technical Track on Natural Language Processing III