Discovering Decoupled Functional Modules in Large Language Models
DOI:
https://doi.org/10.1609/aaai.v40i41.40749
Abstract
Understanding the internal functional organization of Large Language Models (LLMs) is crucial for improving their trustworthiness and performance. However, how LLMs organize different functions into modules remains largely unexplored. To bridge this gap, we formulate a functional module discovery problem and propose an Unsupervised LLM Cross-layer MOdule Discovery (ULCMOD) framework that simultaneously disentangles the large set of neurons across the entire LLM into modules while discovering the topics of input samples related to these modules. Our framework introduces a novel objective function and an efficient Iterative Decoupling (IterD) algorithm. Extensive experiments show that our method discovers high-quality, disentangled modules that capture more meaningful semantic information and achieve superior performance on various downstream tasks. Moreover, our qualitative analysis reveals that the discovered modules exhibit functional comprehensiveness, functional hierarchy, and a clear spatial arrangement of functions within LLMs. Our work provides a novel tool for interpreting LLMs' functional modules, filling a critical gap in LLM interpretability research.
Published
2026-03-14
How to Cite
Yu, Y., Li, J., Sun, Y., Li, P., Wang, Z., & Zheng, Y. (2026). Discovering Decoupled Functional Modules in Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 34503–34511. https://doi.org/10.1609/aaai.v40i41.40749
Issue
Section
AAAI Technical Track on Natural Language Processing VI