Large Language Models Are Read/Write Policy-Makers for Simultaneous Generation

Authors

  • Shoutao Guo — Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
  • Shaolei Zhang — Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
  • Zhengrui Ma — Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China
  • Yang Feng — Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS); University of Chinese Academy of Sciences, Beijing, China; Key Laboratory of AI Safety, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v39i22.34570

Abstract

Simultaneous generation models write generation results while reading streaming inputs, necessitating a policy-maker to determine the appropriate output timing. Existing simultaneous generation methods generally adopt the traditional encoder-decoder architecture and learn the generation and policy-making capabilities through complex dynamic programming techniques. Although LLMs excel at text generation, they face challenges in taking on the role of policy-makers through traditional training methods, which has limited their exploration in simultaneous generation. To overcome these limitations, we propose a novel LLM-driven Simultaneous Generation (LSG) framework, which allows an off-the-shelf LLM to decide the generation timing and produce output concurrently. Specifically, LSG selects the generation policy that minimizes latency as the baseline policy. Referring to this baseline policy, LSG enables the LLM to devise an improved generation policy that better balances latency and generation quality, and to write generation results accordingly. Experiments on simultaneous translation and streaming automatic speech recognition tasks show that our method achieves state-of-the-art performance with open-source LLMs and demonstrates practicality in real-world scenarios.
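To illustrate the read/write setting the abstract describes, the sketch below simulates a simultaneous generation loop in which a policy interleaves READ actions (consume a source token) and WRITE actions (emit an output token). The `wait_k_policy` baseline and the toy uppercasing "translator" are illustrative assumptions for this minimal example, not the paper's LSG implementation, in which the LLM itself plays the policy-maker.

```python
def wait_k_policy(n_read, n_written, k=2):
    # Hypothetical low-latency baseline: write once the output
    # trails the consumed input by at least k tokens.
    return "WRITE" if n_read - n_written >= k else "READ"

def simultaneous_generate(source_tokens, policy):
    # Toy stand-in for the generator: each WRITE "translates" one
    # source token by uppercasing it; a real system would query an
    # LLM conditioned on the tokens read so far.
    read, output = [], []
    i = 0
    while len(output) < len(source_tokens):
        if i < len(source_tokens) and policy(len(read), len(output)) == "READ":
            read.append(source_tokens[i])  # READ: consume next input token
            i += 1
        else:
            output.append(read[len(output)].upper())  # WRITE: emit output
    return output
```

With `k=2`, the loop reads two tokens before its first write, then alternates read/write until the stream ends, at which point it flushes the remaining outputs.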

Published

2025-04-11

How to Cite

Guo, S., Zhang, S., Ma, Z., & Feng, Y. (2025). Large Language Models Are Read/Write Policy-Makers for Simultaneous Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23969-23977. https://doi.org/10.1609/aaai.v39i22.34570

Section

AAAI Technical Track on Natural Language Processing I