What about Inputting Policy in Value Function: Policy Representation and Policy-Extended Value Function Approximator

Authors

  • Hongyao Tang College of Intelligence and Computing, Tianjin University
  • Zhaopeng Meng College of Intelligence and Computing, Tianjin University
  • Jianye Hao College of Intelligence and Computing, Tianjin University
  • Chen Chen Noah’s Ark Lab, Huawei
  • Daniel Graves Noah’s Ark Lab, Huawei
  • Dong Li Noah’s Ark Lab, Huawei
  • Changmin Yu Gatsby Computational Neuroscience Unit, University College London
  • Hangyu Mao Noah’s Ark Lab, Huawei
  • Wulong Liu Noah’s Ark Lab, Huawei
  • Yaodong Yang College of Intelligence and Computing, Tianjin University
  • Wenyuan Tao College of Intelligence and Computing, Tianjin University
  • Li Wang College of Intelligence and Computing, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v36i8.20820

Keywords:

Machine Learning (ML)

Abstract

We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables a PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, namely value generalization among policies. We formally analyze value generalization under Generalized Policy Iteration (GPI). Through both theoretical and empirical lenses, we show that the generalized value estimates offered by a PeVFA may have lower initial approximation error to the true values of successive policies, which is expected to improve consecutive value approximation during GPI. Based on these clues, we introduce a new form of GPI with PeVFA that leverages value generalization along the policy improvement path. Moreover, we propose a representation learning framework for RL policies, providing several approaches to learn effective policy embeddings from policy network parameters or from state-action pairs. In our experiments, we evaluate the efficacy of the value generalization offered by PeVFA and of policy representation learning in several OpenAI Gym continuous control tasks. For a representative instance of algorithm implementation, Proximal Policy Optimization (PPO) re-implemented under the paradigm of GPI with PeVFA achieves about 40% performance improvement over its vanilla counterpart in most environments.
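To make the core idea concrete, below is a minimal sketch (not the authors' code) of a policy-extended value function: a value network that conditions on a learned policy embedding, here produced from sampled state-action pairs. The class names (PolicyEncoder, PeVFA), layer sizes, and embedding dimension are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class PolicyEncoder(nn.Module):
    """Encodes a policy from sampled (state, action) pairs via mean pooling (assumed design)."""
    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, states, actions):
        # states: (N, state_dim), actions: (N, action_dim) sampled from the policy
        pair_features = self.net(torch.cat([states, actions], dim=-1))
        return pair_features.mean(dim=0)  # permutation-invariant policy embedding


class PeVFA(nn.Module):
    """Value approximator V(s, chi_pi) that takes a state and a policy embedding."""
    def __init__(self, state_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, states, policy_embedding):
        # Broadcast the single policy embedding across the batch of states
        x = torch.cat([states, policy_embedding.expand(states.shape[0], -1)], dim=-1)
        return self.net(x)


# Usage: embed the current policy from rolled-out state-action pairs, then
# evaluate states under that policy. Because the value network is conditioned
# on the policy embedding, it can in principle generalize value estimates to
# nearby (e.g., successive) policies along the improvement path.
state_dim, action_dim = 8, 2
encoder, value_fn = PolicyEncoder(state_dim, action_dim), PeVFA(state_dim)
rollout_s, rollout_a = torch.randn(32, state_dim), torch.randn(32, action_dim)
chi_pi = encoder(rollout_s, rollout_a)
values = value_fn(torch.randn(16, state_dim), chi_pi)  # shape: (16, 1)
```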

Published

2022-06-28

How to Cite

Tang, H., Meng, Z., Hao, J., Chen, C., Graves, D., Li, D., Yu, C., Mao, H., Liu, W., Yang, Y., Tao, W., & Wang, L. (2022). What about Inputting Policy in Value Function: Policy Representation and Policy-Extended Value Function Approximator. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8441-8449. https://doi.org/10.1609/aaai.v36i8.20820

Section

AAAI Technical Track on Machine Learning III