Using Bisimulation for Policy Transfer in MDPs


  • Pablo Castro, McGill University
  • Doina Precup, McGill University



Markov Decision Processes, Knowledge transfer, Bisimulation metrics, Options


Knowledge transfer has been suggested as a useful approach for solving large Markov Decision Processes. The main idea is to compute a decision-making policy in one environment and use it in a different environment, provided the two are "close enough". In this paper, we use bisimulation-style metrics (Ferns et al., 2004) to guide knowledge transfer. We propose algorithms that decide which actions to transfer from the policy computed on a small MDP task to a large task, given the bisimulation distance between states in the two tasks. We demonstrate the inherent "pessimism" of bisimulation metrics and present variants of this metric aimed at overcoming this pessimism, leading to improved action transfer. We also show that using this approach to transfer temporally extended actions (Sutton et al., 1999) is more successful than using it exclusively with primitive actions. We present theoretical guarantees on the quality of the transferred policy, as well as promising empirical results.
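The bisimulation metric of Ferns et al. (2004) referenced above is defined as the fixed point of d(s,t) = max_a [ c_R |R(s,a) − R(t,a)| + c_T K_d(P(·|s,a), P(·|t,a)) ], where K_d is the Kantorovich (Wasserstein-1) distance under the current metric d. The following is a minimal sketch of that fixed-point iteration for a small finite MDP; the array layout (R[a,s] for rewards, P[a][s] for next-state distributions), the weights c_r = c_t = 0.5, and the iteration count are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(p, q, d):
    """Kantorovich (Wasserstein-1) distance between discrete distributions
    p and q under ground metric d, solved as a transportation LP."""
    n = len(p)
    c = d.flatten()  # cost of moving one unit of mass from i to j
    A_eq, b_eq = [], []
    for i in range(n):  # row marginals of the flow must match p
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.flatten()); b_eq.append(p[i])
    for j in range(n):  # column marginals must match q
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.flatten()); b_eq.append(q[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun

def bisim_metric(R, P, c_r=0.5, c_t=0.5, iters=50):
    """Fixed-point iteration for a Ferns-style bisimulation metric.
    R[a, s]: reward for action a in state s (hypothetical layout).
    P[a][s]: next-state distribution for action a in state s."""
    n_actions, n_states = R.shape
    d = np.zeros((n_states, n_states))
    for _ in range(iters):
        d_new = np.zeros_like(d)
        for s in range(n_states):
            for t in range(n_states):
                d_new[s, t] = max(
                    c_r * abs(R[a, s] - R[a, t])
                    + c_t * kantorovich(P[a][s], P[a][t], d)
                    for a in range(n_actions)
                )
        d = d_new
    return d

# Toy two-state MDP: distinct rewards and self-loop transitions,
# so the two states are maximally far apart under the metric.
R = np.array([[1.0, 0.0]])                      # one action
P = [np.array([[1.0, 0.0], [0.0, 1.0]])]        # state i stays in i
d = bisim_metric(R, P)
```

In this toy example the recursion d = 0.5·|1 − 0| + 0.5·d converges to d(s0, s1) = 1, which illustrates the "pessimism" the paper discusses: the metric accumulates both immediate reward differences and all downstream transition differences.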




How to Cite

Castro, P., & Precup, D. (2010). Using Bisimulation for Policy Transfer in MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 1065-1070.



Reasoning about Plans, Processes and Actions