Newton Optimization on Helmholtz Decomposition for Continuous Games
Keywords: Multiagent Learning, Reinforcement Learning
Abstract
Many learning problems involve multiple agents optimizing different interactive functions. In these problems, standard policy gradient algorithms fail due to the non-stationarity of the setting and the different interests of each agent. In fact, algorithms must take into account the complex dynamics of these systems to guarantee rapid convergence towards a (local) Nash equilibrium. In this paper, we propose NOHD (Newton Optimization on Helmholtz Decomposition), a Newton-like algorithm for multi-agent learning problems based on the decomposition of the system's dynamics into its irrotational (potential) and solenoidal (Hamiltonian) components. This method ensures quadratic convergence in purely irrotational and purely solenoidal systems. Furthermore, we show that NOHD is attracted to stable fixed points in general multi-agent systems and repelled by strict saddle ones. Finally, we empirically compare NOHD's performance with that of state-of-the-art algorithms on some bimatrix games and continuous Gridworld environments.
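The decomposition the abstract refers to can be illustrated on the Jacobian of the simultaneous-gradient dynamics: its symmetric part corresponds to the irrotational (potential) component and its antisymmetric part to the solenoidal (Hamiltonian) component. The sketch below is an illustrative example under that reading, not the paper's implementation; the function name `helmholtz_split` and the bilinear test game are assumptions for demonstration.

```python
import numpy as np

def helmholtz_split(J):
    """Split a square Jacobian J into its symmetric (potential) and
    antisymmetric (Hamiltonian) parts, so that J = S + A."""
    S = 0.5 * (J + J.T)   # irrotational (potential) component
    A = 0.5 * (J - J.T)   # solenoidal (Hamiltonian) component
    return S, A

# Illustrative example: the zero-sum bilinear game f1(x, y) = x*y,
# f2(x, y) = -x*y. The simultaneous gradient is v = (y, -x), whose
# Jacobian is purely antisymmetric, i.e. a purely solenoidal system.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
S, A = helmholtz_split(J)
assert np.allclose(S, 0)   # no potential component
assert np.allclose(A, J)   # dynamics are purely Hamiltonian
```

A purely potential game (all players sharing one objective) would instead yield a symmetric Jacobian with `A == 0`; general games mix both components, which is the regime the paper's attraction/repulsion results address.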
How to Cite
Ramponi, G., & Restelli, M. (2021). Newton Optimization on Helmholtz Decomposition for Continuous Games. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11325-11333. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17350
AAAI Technical Track on Multiagent Systems