Multirobot Coordination for Space Exploration

Authors

  • Logan Yliniemi, Oregon State University
  • Adrian K. Agogino
  • Kagan Tumer, Oregon State University

DOI:

https://doi.org/10.1609/aimag.v35i4.2556

Abstract

Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential teammates in uncertain and unsafe environments. Directly coding all the rules needed to reliably handle this coordination and uncertainty is problematic. Instead, this article examines tackling the problem through coordinated reinforcement learning: rather than being programmed what to do, the rovers iteratively learn through trial and error to take actions that lead to high overall system return. To allow for coordination while letting each agent learn and act independently, we employ state-of-the-art reward shaping techniques. This article uses visualization techniques to break down complex performance indicators into an accessible form, and identifies key future research directions.
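
The abstract does not name a specific reward-shaping technique. One approach widely associated with this line of multiagent work is the difference reward, D_i = G(z) - G(z_-i), which credits each agent with the global evaluation G(z) minus a counterfactual evaluation with that agent removed. The sketch below illustrates that idea in a toy rover/point-of-interest setting; the domain, function names, and parameters are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def global_reward(rover_positions, poi_positions):
        """G(z): each point of interest is credited with the inverse distance
        to its closest rover (capped so a rover on top of a POI is not infinite)."""
        total = 0.0
        for poi in poi_positions:
            dists = np.linalg.norm(rover_positions - poi, axis=1)
            total += 1.0 / max(dists.min(), 0.5)
        return total

    def difference_rewards(rover_positions, poi_positions):
        """D_i for each rover: the global reward minus the counterfactual
        global reward computed with rover i removed from the team."""
        g = global_reward(rover_positions, poi_positions)
        return [
            g - global_reward(np.delete(rover_positions, i, axis=0), poi_positions)
            for i in range(len(rover_positions))
        ]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        rovers = rng.uniform(0.0, 10.0, size=(4, 2))  # 4 rovers on a 10 x 10 plane
        pois = rng.uniform(0.0, 10.0, size=(6, 2))    # 6 points of interest
        print("G(z) =", global_reward(rovers, pois))
        print("D_i  =", difference_rewards(rovers, pois))

Because the counterfactual term removes the shared effect of teammates' actions, each agent receives a learning signal that is more sensitive to its own behavior than the raw team reward, while actions that improve D_i also improve the system-level return G.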

Author Biography

Adrian K. Agogino

University of California, Santa Cruz 

Published

2014-12-22

How to Cite

Yliniemi, L., Agogino, A. K., & Tumer, K. (2014). Multirobot Coordination for Space Exploration. AI Magazine, 35(4), 61-74. https://doi.org/10.1609/aimag.v35i4.2556

Issue

Vol. 35 No. 4 (2014)

Section

Articles