TY - JOUR
AU - Wang, Kai
AU - Xu, Lily
AU - Perrault, Andrew
AU - Reiter, Michael K.
AU - Tambe, Milind
PY - 2022/06/28
Y2 - 2024/03/28
TI - Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 5
SE - AAAI Technical Track on Game Theory and Economic Paradigms
DO - 10.1609/aaai.v36i5.20457
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20457
SP - 5219-5227
AB - A growing body of work in game theory extends the traditional Stackelberg game to settings with one leader and multiple followers who play a Nash equilibrium. Standard approaches for computing equilibria in these games reformulate the followers' best response as constraints in the leader's optimization problem. These reformulation approaches can sometimes be effective, but make limiting assumptions on the followers' objectives and the equilibrium reached by followers, e.g., uniqueness, optimism, or pessimism. To overcome these limitations, we run gradient descent to update the leader's strategy by differentiating through the equilibrium reached by followers. Our approach generalizes to any stochastic equilibrium selection procedure that chooses from multiple equilibria, where we compute the stochastic gradient by back-propagating through a sampled Nash equilibrium using the solution to a partial differential equation to establish the unbiasedness of the stochastic gradient. Using the unbiased gradient estimate, we implement the gradient-based approach to solve three Stackelberg problems with multiple followers. Our approach consistently outperforms existing baselines to achieve higher utility for the leader.
ER -