Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation

Authors

  • Yeqin Zhang Nanjing University
  • Yizheng Zhao Nanjing University
  • Chen Hu StepFun
  • Binxing Jiao StepFun
  • Daxin Jiang StepFun
  • Ruihang Miao StepFun
  • Cam-Tu Nguyen Nanjing University

DOI:

https://doi.org/10.1609/aaai.v40i34.40075

Abstract

Text representation plays a critical role in tasks like clustering, retrieval, and other downstream applications. With the emergence of large language models (LLMs), there is increasing interest in harnessing their capabilities for this purpose. However, most LLMs are inherently causal and optimized for next-token prediction, making them suboptimal for producing holistic representations. To address this, recent studies have introduced pretext tasks to adapt LLMs for text representation. Most of these tasks, however, rely on token-level prediction objectives, such as the masked next-token prediction (MNTP) used in LLM2Vec. In this work, we explore the untapped potential of context compression as a pretext task for the unsupervised adaptation of LLMs. During compression pre-training, the model learns to generate compact memory tokens that substitute for the whole context in downstream sequence prediction. Experiments demonstrate that a well-designed compression objective can significantly enhance LLM-based text representations, outperforming models trained with token-level pretext tasks. Further improvement through contrastive learning produces a strong representation model (LLM2Comp) that outperforms contemporary LLM-based text encoders on a wide range of tasks while being more sample-efficient, requiring significantly less training data.
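The compression pretext task described above can be sketched as a simple data flow: a long context is squeezed into a handful of memory vectors, and the decoder must predict the continuation from those memory vectors alone. The sketch below is a minimal, hypothetical illustration of that flow; the chunked mean-pooling compressor and all names are assumptions for clarity, not the paper's actual architecture.

```python
import numpy as np

def compress_context(context_embs: np.ndarray, k: int) -> np.ndarray:
    """Collapse a (T, d) context into (k, d) memory vectors.

    Chunked mean-pooling stands in for the learned compressor here;
    in the paper the memory tokens are produced by the LLM itself.
    """
    chunks = np.array_split(context_embs, k, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

def decoder_inputs(memory: np.ndarray, continuation_embs: np.ndarray) -> np.ndarray:
    """The decoder conditions only on [memory; continuation], never the raw context."""
    return np.concatenate([memory, continuation_embs], axis=0)

rng = np.random.default_rng(0)
context = rng.normal(size=(128, 16))   # 128 context tokens, embedding dim 16
cont = rng.normal(size=(32, 16))       # 32 continuation tokens to predict
memory = compress_context(context, k=4)  # 4 memory tokens replace 128 context tokens
x = decoder_inputs(memory, cont)
print(memory.shape, x.shape)           # (4, 16) (36, 16)
```

The point of the objective is visible in the shapes: the sequence-prediction loss on the continuation can only be lowered if the 4 memory vectors retain the information of all 128 context tokens, which is what makes them useful as a holistic text representation.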

Published

2026-03-14

How to Cite

Zhang, Y., Zhao, Y., Hu, C., Jiao, B., Jiang, D., Miao, R., & Nguyen, C.-T. (2026). Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 28456–28464. https://doi.org/10.1609/aaai.v40i34.40075

Section

AAAI Technical Track on Machine Learning XI