TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint)

Authors

  • João G. Ribeiro (INESC-ID, IST Taguspark, Av. Prof. Dr. Cavaco Silva, Porto Salvo, 2744-016, Portugal)
  • Gonçalo Rodrigues (Google, Google Building 110, Brandschenkestrasse 110, Zürich, 8002, Switzerland)
  • Alberto Sardinha (INESC-ID, IST Taguspark, Av. Prof. Dr. Cavaco Silva, Porto Salvo, 2744-016, Portugal; Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Brazil)
  • Francisco S. Melo (INESC-ID, IST Taguspark, Av. Prof. Dr. Cavaco Silva, Porto Salvo, 2744-016, Portugal)

DOI

https://doi.org/10.1609/aaai.v38i20.30608

Keywords

Journal Track

Abstract

This paper investigates the use of model-based reinforcement learning in the context of ad hoc teamwork. We introduce a novel approach, named TEAMSTER, in which the environment's model and the model of the teammates' behavior are learned separately. Compared with the state-of-the-art PLASTIC algorithms, our results in four different domains from the multi-agent systems literature show that TEAMSTER is more flexible than PLASTIC-Model, by learning the environment's model instead of assuming a perfect hand-coded model, and more robust and efficient than PLASTIC-Policy, by continuously adapting to newly encountered teams without implicitly learning a new environment model from scratch.
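To make the separation concrete, below is a minimal, hypothetical sketch of the idea the abstract describes: an ad hoc agent that maintains a learned environment (dynamics) model and a separate learned model of teammate behavior, and plans against both. The class name, count-based tables, and one-step lookahead are illustrative assumptions for this sketch, not the paper's actual TEAMSTER implementation.

    # Hypothetical sketch: separate learned models for the environment
    # and for teammate behavior, as the abstract describes. Names and
    # design are illustrative, not the paper's implementation.
    from collections import defaultdict
    import random

    class AdHocAgent:
        def __init__(self, actions, gamma=0.95):
            self.actions = actions
            self.gamma = gamma
            # Environment model: counts of (state, joint_action) -> next_state.
            self.env_counts = defaultdict(lambda: defaultdict(int))
            # Teammate model: counts of state -> teammate_action, kept
            # separate so it can be relearned for a new team without
            # discarding the environment model.
            self.mate_counts = defaultdict(lambda: defaultdict(int))
            # Last observed reward per (state, joint_action).
            self.rewards = defaultdict(float)

        def observe(self, state, my_action, mate_action, next_state, reward):
            joint = (my_action, mate_action)
            self.env_counts[(state, joint)][next_state] += 1
            self.mate_counts[state][mate_action] += 1
            self.rewards[(state, joint)] = reward

        def predict_mate(self, state):
            # Most frequently observed teammate action in this state.
            counts = self.mate_counts[state]
            if not counts:
                return random.choice(self.actions)
            return max(counts, key=counts.get)

        def act(self, state, value_fn=lambda s: 0.0):
            # One-step lookahead through both learned models.
            mate_action = self.predict_mate(state)
            def q(a):
                joint = (a, mate_action)
                nexts = self.env_counts[(state, joint)]
                total = sum(nexts.values()) or 1
                exp_v = sum(n / total * value_fn(s2) for s2, n in nexts.items())
                return self.rewards[(state, joint)] + self.gamma * exp_v
            return max(self.actions, key=q)

In this sketch, meeting a new team would only require resetting mate_counts while the environment model carries over, which mirrors the trade-off the abstract draws against PLASTIC-Model (hand-coded environment model) and PLASTIC-Policy (relearning from scratch per team).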

Published

2024-03-24

How to Cite

Ribeiro, J. G., Rodrigues, G., Sardinha, A., & Melo, F. S. (2024). TEAMSTER: Model-Based Reinforcement Learning for Ad Hoc Teamwork (Abstract Reprint). Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22708-22708. https://doi.org/10.1609/aaai.v38i20.30608