Modifying RL Policies with Imagined Actions: How Predictable Policies Can Enable Users to Perform Novel Tasks

Authors

  • Isaac Sheidlower, Tufts University, Department of Computer Science
  • Reuben Aronson, Tufts University, Department of Computer Science
  • Elaine Short, Tufts University, Department of Computer Science

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27670

Keywords:

Human-robot Interaction, User-centered Learning, Shared Control

Abstract

It is crucial that users are empowered to use a robot's functionalities to creatively solve problems on the fly. A user who has access to a Reinforcement Learning (RL) based robot may want to combine the robot's autonomy with their knowledge of its behavior to complete new tasks. One way to do this is for the user to take control of some of the robot's action space through teleoperation while the RL policy simultaneously controls the rest. However, an out-of-the-box RL policy may not readily facilitate this. For example, the user's control may bring the robot into what the policy regards as a failure state, causing the robot to act in a way the user is not familiar with and hindering the success of the user's desired task. In this work, we formalize this problem and present Imaginary Out-of-Distribution Actions (IODA), an initial algorithm for addressing it and empowering users to leverage their expectations of the robot's behavior to accomplish new tasks.
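To make the shared-control setup described above concrete, the following is a minimal Python sketch. The action blending (the user teleoperates some action dimensions while the policy controls the rest) follows directly from the abstract; everything else, including the env, policy, and get_user_action placeholders and the env.predict one-step model, is a hypothetical illustration of the idea suggested by the paper's title, not the authors' algorithm.

    import numpy as np

    def blend_actions(policy_action, user_action, user_mask):
        # Shared-control composition: dimensions where user_mask is True
        # come from the user's teleoperation; the rest come from the policy.
        return np.where(user_mask, user_action, policy_action)

    def shared_control_rollout(env, policy, get_user_action, user_mask, steps=100):
        # 'env', 'policy', and 'get_user_action' are placeholders; 'env.predict'
        # is a hypothetical one-step dynamics model used only for illustration.
        state = env.reset()
        imagined_state = state  # the state the policy "believes" it is in
        for _ in range(steps):
            policy_action = policy(imagined_state)
            action = blend_actions(policy_action, get_user_action(), user_mask)
            state, _, done, _ = env.step(action)  # real dynamics see the blended action
            # Advance the policy's imagined state with its own (unblended)
            # action, so the user's control never pushes the policy's input
            # out of distribution.
            imagined_state = env.predict(imagined_state, policy_action)
            if done:
                break
        return state

The key point the sketch illustrates is the separation between the state the environment actually reaches under the blended action and the state the policy conditions on, which stays consistent with the policy's own behavior.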

Published

2024-01-22

Section

Artificial Intelligence for Human-Robot Interaction (AI-HRI)