Quantum Multi-Agent Meta Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v37i9.26313
Keywords:
ML: Quantum Machine Learning, ML: Lifelong and Continual Learning, ML: Meta Learning, ML: Reinforcement Learning Algorithms, ML: Reinforcement Learning Theory
Abstract
Although quantum supremacy is yet to come, there has recently been an increasing interest in identifying the potential of quantum machine learning (QML) in the looming era of practical quantum computing. Motivated by this, in this article we re-design multi-agent reinforcement learning (MARL) based on the unique characteristics of quantum neural networks (QNNs) having two separate dimensions of trainable parameters: angle parameters affecting the output qubit states, and pole parameters associated with the output measurement basis. Exploiting this dyadic trainability as meta-learning capability, we propose quantum meta MARL (QM2ARL) that first applies angle training for meta-QNN learning, followed by pole training for few-shot or local-QNN training. To avoid overfitting, we develop an angle-to-pole regularization technique injecting noise into the pole domain during angle training. Furthermore, by exploiting the pole as the memory address of each trained QNN, we introduce the concept of pole memory allowing one to save and load trained QNNs using only two-parameter pole values. We theoretically prove the convergence of angle training under the angle-to-pole regularization, and by simulation corroborate the effectiveness of QM2ARL in achieving high reward and fast convergence, as well as of the pole memory in fast adaptation to a time-varying environment.
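The dyadic parameterization described in the abstract can be illustrated with a minimal single-qubit sketch: angle parameters prepare the output state via rotation gates, while a two-parameter pole (θ, φ) tilts the measurement basis along a Bloch-sphere direction. This is an illustrative toy model, not the paper's actual QNN architecture; the function names and the Gaussian form of the angle-to-pole noise are assumptions for demonstration.

```python
import numpy as np

def ry(t):
    """Single-qubit rotation about the y-axis by angle t."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    """Single-qubit rotation about the z-axis by angle t."""
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def qnn_output(angles, pole):
    """Toy single-qubit QNN: `angles` shape the output state, while the
    two-parameter `pole` (theta, phi) selects the measurement basis, i.e.
    a Pauli observable along the Bloch direction (theta, phi)."""
    a1, a2 = angles
    theta, phi = pole
    state = rz(a2) @ ry(a1) @ np.array([1, 0], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    obs = (np.sin(theta) * np.cos(phi) * sx
           + np.sin(theta) * np.sin(phi) * sy
           + np.cos(theta) * sz)
    return float(np.real(state.conj() @ obs @ state))

def regularized_output(angles, pole, sigma, rng):
    """Angle-to-pole regularization sketch: during angle training, evaluate
    the QNN at a pole perturbed by Gaussian noise of scale sigma."""
    noisy_pole = np.asarray(pole) + rng.normal(0.0, sigma, size=2)
    return qnn_output(angles, noisy_pole)
```

Under this toy model, pole training moves only the two measurement-basis parameters, which is why a trained QNN can later be addressed ("pole memory") by just those two values while the shared angle parameters stay fixed.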
Published
2023-06-26
How to Cite
Yun, W. J., Park, J., & Kim, J. (2023). Quantum Multi-Agent Meta Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11087-11095. https://doi.org/10.1609/aaai.v37i9.26313
Section
AAAI Technical Track on Machine Learning IV