LLM Collaboration with Multi-Agent Reinforcement Learning

Authors

  • Shuo Liu, Northeastern University, Boston, MA
  • Zeyu Liang, Northeastern University, Boston, MA
  • Xueguang Lyu, Northeastern University, Boston, MA
  • Christopher Amato, Northeastern University, Boston, MA

DOI:

https://doi.org/10.1609/aaai.v40i38.40487

Abstract

A large amount of work has been done in Multi-Agent Systems (MAS) for modeling and solving problems with multiple interacting agents. However, most LLMs are pretrained independently and not specifically optimized for coordination. Existing LLM fine-tuning frameworks rely on individual rewards, which require complex reward designs for each agent to encourage collaboration. To address these challenges, we model LLM collaboration as a cooperative Multi-Agent Reinforcement Learning (MARL) problem. We develop a multi-agent, multi-turn algorithm, Multi-Agent Group Relative Policy Optimization (MAGRPO), to solve it, building on current RL approaches for LLMs as well as MARL techniques. Our experiments on LLM writing and coding collaboration demonstrate that fine-tuning MAS with MAGRPO enables agents to generate high-quality responses efficiently through effective cooperation. Our approach opens the door to using MARL methods for LLM collaboration and highlights the associated challenges.
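The abstract describes MAGRPO as building on group-relative RL approaches for LLMs (in the style of GRPO) with a shared cooperative objective. The sketch below illustrates one plausible core of such a method, not the authors' exact formulation: each joint response sampled by the agent group is scored with a single shared reward, and advantages are computed relative to the group's mean and standard deviation, then broadcast to every agent. All function names and the two-step structure are illustrative assumptions.

```python
import statistics

def group_relative_advantages(group_rewards):
    """GRPO-style advantage: normalize each sampled response's reward
    by the mean and (population) std of its sampling group."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    if std == 0:
        # All samples scored identically: no learning signal.
        return [0.0 for _ in group_rewards]
    return [(r - mean) / std for r in group_rewards]

def magrpo_shared_advantages(joint_rewards, num_agents):
    """Cooperative multi-agent variant (illustrative sketch):
    every agent shares the same joint reward per sampled joint
    response, so one group-relative advantage per sample is
    broadcast to all agents, avoiding per-agent reward design."""
    adv = group_relative_advantages(joint_rewards)
    return [list(adv) for _ in range(num_agents)]
```

Because the reward is joint, no per-agent reward shaping is needed: an agent whose contribution improves the group's output sees a positive advantage, which is the cooperative property the abstract emphasizes.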

Published

2026-03-14

How to Cite

Liu, S., Liang, Z., Lyu, X., & Amato, C. (2026). LLM Collaboration with Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32150-32158. https://doi.org/10.1609/aaai.v40i38.40487

Section

AAAI Technical Track on Natural Language Processing III