Cooperative Multi-Agent Fairness and Equivariant Policies

Authors

  • Niko A. Grupen, Cornell University
  • Bart Selman, Cornell University
  • Daniel D. Lee, Cornell Tech

DOI:

https://doi.org/10.1609/aaai.v36i9.21166

Keywords:

Multiagent Systems (MAS)

Abstract

We study fairness through the lens of cooperative multi-agent learning. Our work is motivated by empirical evidence that naive maximization of team reward yields unfair outcomes for individual team members. To address fairness in multi-agent contexts, we introduce team fairness, a group-based fairness measure for multi-agent learning. We then prove that it is possible to enforce team fairness during policy optimization by transforming the team's joint policy into an equivariant map. We refer to our multi-agent learning strategy as Fairness through Equivariance (Fair-E) and demonstrate its effectiveness empirically. We then introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of Fair-E and show that it reaches higher levels of utility than Fair-E and fairer outcomes than non-equivariant policies. Finally, we present novel findings regarding the fairness-utility trade-off in multi-agent settings, showing that the magnitude of the trade-off depends on agent skill.
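For illustration only, below is a minimal sketch of how an equivariance penalty could be attached to a policy-optimization objective as a soft constraint, in the spirit of the Fair-ER idea described above. It assumes the relevant symmetry group is permutations of agent indices and uses PyTorch; the function name equivariance_penalty, the tensor shapes, and the weight lam are assumptions made here for the example, not the paper's implementation.

    import torch

    def equivariance_penalty(policy, states, perm):
        # Hypothetical sketch: measure how far `policy` is from being
        # permutation-equivariant. `states` has shape (batch, n_agents, obs_dim),
        # `policy(states)` returns per-agent outputs of shape
        # (batch, n_agents, act_dim), and `perm` is a permutation of agent indices.
        out_of_permuted_input = policy(states[:, perm, :])   # pi(g . s)
        permuted_output = policy(states)[:, perm, :]          # g . pi(s)
        return ((out_of_permuted_input - permuted_output) ** 2).mean()

    # Illustrative soft-constraint usage: add the penalty to the usual
    # policy loss with a weight `lam` that trades fairness against utility.
    #   perm = torch.randperm(n_agents)
    #   total_loss = policy_loss + lam * equivariance_penalty(policy, states, perm)

In this sketch, driving the penalty to zero would make the joint policy an equivariant map with respect to the chosen transformations, while a finite lam only encourages equivariance, which is what distinguishes a soft-constraint variant from a hard architectural constraint.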

Published

2022-06-28

How to Cite

Grupen, N. A., Selman, B., & Lee, D. D. (2022). Cooperative Multi-Agent Fairness and Equivariant Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9350-9359. https://doi.org/10.1609/aaai.v36i9.21166

Section

AAAI Technical Track on Multiagent Systems