Towards Multiagent Meta-level Control

Authors

  • Shanjun Cheng, The University of North Carolina at Charlotte
  • Anita Raja, The University of North Carolina at Charlotte
  • Victor Lesser, University of Massachusetts Amherst

DOI:

https://doi.org/10.1609/aaai.v24i1.7788

Abstract

Embedded systems consisting of collaborating agents capable of interacting with their environment are becoming ubiquitous. It is crucial for these systems to be able to adapt to the dynamic and uncertain characteristics of an open environment. In this paper, we argue that multiagent meta-level control (MMLC) is an effective way to determine when this adaptation process should be done and how much effort should be invested in adaptation as opposed to continuing with the current action plan. We describe a reinforcement-learning-based approach to learning decentralized meta-level control policies offline. We then propose to use the learned reward model as input to a global optimization algorithm to avoid conflicting meta-level decisions between coordinating agents. Our initial experiments in the context of NetRads, a multiagent tornado-tracking application, show that MMLC significantly improves performance in a 3-agent network.
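To make the idea of a learned meta-level control policy concrete, the sketch below shows a generic tabular Q-learning loop for a single agent's adapt-versus-continue decision. This is purely illustrative: the abstract states, the toy reward model, and the random environment drift are hypothetical stand-ins, not the states, rewards, or algorithm used in the paper's NetRads experiments.

```python
import random

# Hypothetical abstract environment conditions and meta-level choices.
STATES = ["stable", "changing"]
ACTIONS = ["continue", "adapt"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def toy_reward(state, action):
    # Assumed reward model: adapting pays off only when the environment
    # is changing; otherwise it wastes deliberation effort.
    if state == "changing":
        return 1.0 if action == "adapt" else -1.0
    return 1.0 if action == "continue" else -0.5

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy selection of a meta-level action.
        if rng.random() < EPSILON:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = toy_reward(s, a)
        s_next = rng.choice(STATES)  # toy transition: environment drifts randomly
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under this toy reward model the greedy policy learns to adapt when the environment is changing and to continue otherwise; in the paper's setting, the learned reward model additionally feeds a global optimization step so that coordinating agents do not reach conflicting meta-level decisions.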

Published

2010-07-05

How to Cite

Cheng, S., Raja, A., & Lesser, V. (2010). Towards Multiagent Meta-level Control. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 1925-1926. https://doi.org/10.1609/aaai.v24i1.7788