Model-Based Reinforcement Learning with Multinomial Logistic Function Approximation

Authors

  • Taehyun Hwang, Seoul National University
  • Min-hwan Oh, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v37i7.25964

Keywords:

ML: Reinforcement Learning Theory, ML: Reinforcement Learning Algorithms

Abstract

We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDPs) whose transition probability is parametrized by an unknown transition core with features of state and action. Despite much recent progress in analyzing algorithms for the linear MDP setting, the understanding of more general transition models remains limited. In this paper, we propose a provably efficient RL algorithm for MDPs whose state transitions follow a multinomial logistic model. We show that our proposed algorithm, based on upper confidence bounds, achieves an O(d√(H^3 T)) regret bound, where d is the dimension of the transition core, H is the horizon, and T is the total number of steps. To the best of our knowledge, this is the first model-based RL algorithm with multinomial logistic function approximation that comes with provable guarantees. We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms existing methods, achieving both provable efficiency and superior practical performance.
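To make the transition model concrete: under a multinomial logistic model, the probability of moving to a next state is a softmax over linear scores of the form phi(s, a, s')^T theta, where theta is the unknown transition core of dimension d. Below is a minimal Python sketch of this computation; the feature map `phi`, the parameter name `theta`, and the toy numbers are all illustrative assumptions, not the paper's implementation.

    import numpy as np

    def mnl_transition_probs(phi, theta):
        """Multinomial logistic (softmax) transition probabilities.

        phi:   (num_next_states, d) matrix; row i holds phi(s, a, s_i)
        theta: (d,) transition core parameter (unknown in the paper,
               estimated online by the algorithm)
        Returns P(s' | s, a) for each candidate next state s'.
        """
        logits = phi @ theta            # linear scores phi(s, a, s')^T theta
        logits -= logits.max()          # shift for numerical stability
        weights = np.exp(logits)
        return weights / weights.sum()  # normalize into a distribution

    # Toy example: d = 3 features, 4 reachable next states
    rng = np.random.default_rng(0)
    phi = rng.normal(size=(4, 3))
    theta = rng.normal(size=3)
    print(mnl_transition_probs(phi, theta))  # nonnegative, sums to 1

In the paper's setting theta is not known; the algorithm maintains an estimate of it and adds an upper-confidence-bound bonus, which is what yields the O(d√(H^3 T)) regret guarantee stated above.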

Published

2023-06-26

How to Cite

Hwang, T., & Oh, M.-H. (2023). Model-Based Reinforcement Learning with Multinomial Logistic Function Approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 7971-7979. https://doi.org/10.1609/aaai.v37i7.25964

Section

AAAI Technical Track on Machine Learning II