Towards Better Code Understanding in Decoder-Only Models with Contrastive Learning

Authors

  • Jiayi Lin, International Digital Economy Academy; Tencent
  • Yanlin Wang, Sun Yat-sen University
  • Yibiao Yang, Nanjing University
  • Lei Zhang, International Digital Economy Academy
  • Yutao Xie, International Digital Economy Academy

DOI:

https://doi.org/10.1609/aaai.v40i38.40471

Abstract

Recent advances in large-scale code generation models have led to remarkable progress in producing high-quality code. These models are trained in a self-supervised manner on extensive unlabeled code corpora using a decoder-only architecture. However, despite their generative strength, decoder-only models often exhibit limited performance on code understanding tasks such as code search and clone detection, primarily due to their generation-oriented training objectives. Training large encoder-only models from scratch on massive code datasets can improve understanding ability, but doing so remains computationally expensive and time-consuming. In this paper, we explore a more efficient alternative by transferring knowledge from pre-trained decoder-only code generation models to code understanding tasks. We investigate how decoder-only architectures can be effectively adapted to learn discriminative and semantically meaningful code representations. To this end, we propose CL4D, a contrastive learning framework tailored to strengthen the representation capabilities of decoder-only models. Extensive experiments on multiple benchmark datasets demonstrate that CL4D achieves competitive or superior performance compared to existing methods on representative code understanding tasks, including code search and clone detection. Further analysis reveals that CL4D substantially improves the semantic alignment of code representations by reducing the distance between semantically similar code snippets. These findings highlight the feasibility of leveraging decoder-only models as a unified backbone for both code generation and understanding.
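The abstract describes contrastive learning as the mechanism for pulling semantically similar code closer in the decoder-only model's embedding space. The paper's exact recipe is not reproduced here; the PyTorch sketch below shows the generic ingredients such an approach typically combines: pooling a causal model's hidden states into a single embedding, then training with an in-batch InfoNCE objective. The pooling strategy, temperature, and function names are illustrative assumptions, not CL4D's specification.

```python
import torch
import torch.nn.functional as F


def last_token_pool(hidden_states: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    """Pool a decoder-only model's output by taking each sequence's
    last non-padding token state (a common choice for causal models,
    since that position has attended to the full sequence).
    Assumes right-padded inputs; CL4D's pooling may differ."""
    last_idx = attention_mask.sum(dim=1).long() - 1        # (batch,)
    batch_idx = torch.arange(hidden_states.size(0),
                             device=hidden_states.device)
    return hidden_states[batch_idx, last_idx]              # (batch, dim)


def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE: row i of `positive` is the positive example
    for row i of `anchor`; every other row in the batch serves as a
    negative. The 0.05 temperature is an illustrative default."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                       # (batch, batch)
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

In a code search setting, the anchors would be embeddings of natural-language queries and the positives their matching code snippets; for clone detection, the pairs would be semantically equivalent snippets. Minimizing this loss reduces the embedding distance between such pairs, which is the effect the abstract reports.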

Published

2026-03-14

How to Cite

Lin, J., Wang, Y., Yang, Y., Zhang, L., & Xie, Y. (2026). Towards Better Code Understanding in Decoder-Only Models with Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32006–32014. https://doi.org/10.1609/aaai.v40i38.40471

Issue

Vol. 40 No. 38 (2026)

Section

AAAI Technical Track on Natural Language Processing III