Learning in Repeated Games with Minimal Information: The Effects of Learning Bias

Authors

  • Jacob Crandall, Masdar Institute of Science and Technology
  • Asad Ahmed, Masdar Institute of Science and Technology
  • Michael Goodrich, Brigham Young University

DOI:

https://doi.org/10.1609/aaai.v25i1.7871

Abstract

Automated agents for electricity markets, social networks, and other distributed networks must repeatedly interact with other intelligent agents, often without observing associates' actions or payoffs (i.e., minimal information). Given this reality, our goal is to create algorithms that learn effectively in repeated games played with minimal information. As in other applications of machine learning, the success of a learning algorithm in repeated games depends on its learning bias. To better understand what learning biases are most successful, we analyze the learning biases of previously published multi-agent learning (MAL) algorithms. We then describe a new algorithm that adapts a successful learning bias from the literature to minimal information environments. Finally, we compare the performance of this algorithm with ten other algorithms in repeated games played with minimal information.

Published

2011-08-04

How to Cite

Crandall, J., Ahmed, A., & Goodrich, M. (2011). Learning in Repeated Games with Minimal Information: The Effects of Learning Bias. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 650-656. https://doi.org/10.1609/aaai.v25i1.7871

Section

AAAI Technical Track: Multiagent Systems