Admissible Policy Teaching through Reward Design
Keywords: Machine Learning (ML)
Abstract

We study reward design strategies for incentivizing a reinforcement learning agent to adopt a policy from a set of admissible policies. The goal of the reward designer is to modify the underlying reward function cost-efficiently while ensuring that any approximately optimal deterministic policy under the new reward function is admissible and performs well under the original reward function. This problem can be viewed as a dual to the problem of optimal reward poisoning attacks: instead of forcing an agent to adopt a specific policy, the reward designer incentivizes an agent to avoid taking actions that are inadmissible in certain states. Perhaps surprisingly, and in contrast to the problem of optimal reward poisoning attacks, we first show that the reward design problem for admissible policy teaching is computationally challenging, and it is NP-hard to find an approximately optimal reward modification. We then proceed by formulating a surrogate problem whose optimal solution approximates the optimal solution to the reward design problem in our setting, but is more amenable to optimization techniques and analysis. For this surrogate problem, we present characterization results that provide bounds on the value of the optimal solution. Finally, we design a local search algorithm to solve the surrogate problem and showcase its utility using simulation-based experiments.
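To make the setting concrete, the sketch below illustrates the general idea of reward design for admissible policy teaching on a hypothetical toy MDP: the original reward is perturbed until the induced optimal deterministic policy avoids inadmissible state-action pairs, with cost measured as the total reward modification. This is an illustrative greedy local search under assumed parameters (the MDP, the admissibility constraint, and the penalty step are all invented here), not the algorithm from the paper.

```python
import numpy as np

# Hypothetical toy MDP (3 states, 2 actions); the transition model,
# rewards, and admissibility constraint are illustrative assumptions.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution over next states
R = np.array([[0.0, 5.0], [0.5, 1.5], [0.2, 0.8]])                # original reward function

def optimal_policy(R):
    """Deterministic optimal policy under reward R, via value iteration."""
    V = np.zeros(n_states)
    for _ in range(500):
        Q = R + gamma * P @ V   # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Admissibility constraint (assumed): action 1 is inadmissible in state 0.
inadmissible = {(0, 1)}

def admissible(pi):
    return all((s, pi[s]) not in inadmissible for s in range(n_states))

# Greedy local search: penalize the inadmissible actions the current
# optimal policy takes, until the induced policy is admissible.
R_new, step = R.copy(), 0.5
pi = optimal_policy(R_new)
while not admissible(pi):
    for s in range(n_states):
        if (s, pi[s]) in inadmissible:
            R_new[s, pi[s]] -= step
    pi = optimal_policy(R_new)

# Cost of the modification: total absolute change to the reward function.
cost = np.abs(R_new - R).sum()
```

Under the original rewards the agent prefers the inadmissible action in state 0, so the search must pay a positive modification cost to steer it away; the paper's point is that doing this cost-optimally over all admissible policies is NP-hard in general, which motivates the surrogate problem.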
How to Cite
Banihashem, K., Singla, A., Gan, J., & Radanovic, G. (2022). Admissible Policy Teaching through Reward Design. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6037-6045. https://doi.org/10.1609/aaai.v36i6.20550
AAAI Technical Track on Machine Learning I