Understanding Individual Agent Importance in Multi-Agent System via Counterfactual Reasoning

Authors

  • Jianming Chen, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China
  • Yawen Wang, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China
  • Junjie Wang, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China
  • Xiaofei Xie, Singapore Management University, Singapore
  • Jun Hu, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China
  • Qing Wang, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China
  • Fanjiang Xu, Institute of Software, Chinese Academy of Sciences; Science & Technology on Integrated Information System Laboratory; State Key Laboratory of Intelligent Game; University of Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v39i15.33733

Abstract

Explaining multi-agent systems (MAS) is increasingly urgent as these systems become prevalent in diverse applications. Prior work has explained the actions or states of individual agents, yet falls short of revealing a black-box agent's importance within the MAS and the overall team strategy. To bridge this gap, we propose EMAI, a novel agent-level explanation approach that evaluates each individual agent's importance. Inspired by counterfactual reasoning, a larger change in reward caused by randomizing an agent's actions indicates that agent's higher importance. We model this evaluation as a multi-agent reinforcement learning (MARL) problem to capture interactions across agents: using counterfactual reasoning, EMAI learns masking agents that identify the important ones. Specifically, we define the optimization objective to minimize the reward difference before and after action randomization, and introduce sparsity constraints to encourage the exploration of more action randomization during training. Experimental results on seven multi-agent tasks demonstrate that EMAI achieves higher-fidelity explanations than baselines and provides more effective guidance in practical applications: understanding policies, launching attacks, and patching policies.
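The core counterfactual intuition can be illustrated with a minimal sketch: score each agent by how much the team reward changes when only that agent's action is randomized. This is a Monte Carlo stand-in for the idea, not the paper's actual method (EMAI learns masking agents via MARL rather than brute-force perturbation); the toy reward function, weights, and policy below are purely hypothetical.

```python
import random

def team_reward(actions, weights):
    # Toy cooperative reward: weighted sum over agents choosing the "good" action (1).
    return sum(w * a for w, a in zip(weights, actions))

def agent_importance(policy, weights, n_agents, n_trials=1000, seed=0):
    """Estimate each agent's importance as the mean absolute change in team
    reward when that agent's action alone is replaced by a random one."""
    rng = random.Random(seed)
    base_actions = [policy(i) for i in range(n_agents)]
    base_reward = team_reward(base_actions, weights)
    importance = []
    for i in range(n_agents):
        diffs = []
        for _ in range(n_trials):
            actions = list(base_actions)
            actions[i] = rng.choice([0, 1])  # counterfactual: randomize agent i only
            diffs.append(abs(base_reward - team_reward(actions, weights)))
        importance.append(sum(diffs) / n_trials)
    return importance

# Three agents whose actions matter unequally (illustrative weights).
weights = [3.0, 1.0, 0.5]
policy = lambda i: 1  # every agent's policy picks the "good" action
scores = agent_importance(policy, weights, n_agents=3)
```

Under this setup the agent with the largest weight receives the highest importance score, mirroring the paper's premise that a larger reward change under randomization signals a more important agent.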

Published

2025-04-11

How to Cite

Chen, J., Wang, Y., Wang, J., Xie, X., Hu, J., Wang, Q., & Xu, F. (2025). Understanding Individual Agent Importance in Multi-Agent System via Counterfactual Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 15785–15794. https://doi.org/10.1609/aaai.v39i15.33733

Section

AAAI Technical Track on Machine Learning I