Few-Shot Bayesian Imitation Learning with Logical Program Policies

  • Tom Silver MIT
  • Kelsey R. Allen MIT
  • Alex K. Lew MIT
  • Leslie Pack Kaelbling MIT
  • Josh Tenenbaum MIT

Abstract
Humans can learn many novel tasks from a very small number (1–5) of demonstrations, in stark contrast to the data requirements of nearly tabula rasa deep learning methods. We propose an expressive class of policies, a strong but general prior, and a learning algorithm that, together, can learn interesting policies from very few examples. We represent policies as logical combinations of programs drawn from a domain-specific language (DSL), define a prior over policies with a probabilistic grammar, and derive an approximate Bayesian inference algorithm to learn policies from demonstrations. In experiments, we study six strategy games played on a 2D grid with one shared DSL. After a few demonstrations of each game, the inferred policies generalize to new game instances that differ substantially from the demonstrations. Our policy learning is 20–1,000x more data efficient than convolutional and fully convolutional policy learning and many orders of magnitude more computationally efficient than vanilla program induction. We argue that the proposed method is an apt choice for tasks that have scarce training data and feature significant, structured variation between task instances.
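To make the abstract's notion of a "logical combination of programs drawn from a DSL" concrete, here is a minimal, purely illustrative sketch. The DSL primitives, grid encoding, and policy structure below are assumptions for exposition, not the paper's actual implementation: each feature program maps a (state, action) pair to a boolean, and a policy is a disjunction of conjunctions (DNF) over such programs.

```python
# Illustrative sketch only: a toy "logical program policy" for a 2D grid game.
# States are 2D grids of integers; an action is a (row, col) cell to act on.
# These primitives and names are hypothetical, not from the paper's code.

def cell_is(value):
    """DSL primitive: does the acted-on cell contain `value`?"""
    def feature(grid, action):
        r, c = action
        return grid[r][c] == value
    return feature

def shifted(dr, dc, feature):
    """DSL combinator: evaluate `feature` at an offset from the action."""
    def shifted_feature(grid, action):
        r, c = action
        r2, c2 = r + dr, c + dc
        if 0 <= r2 < len(grid) and 0 <= c2 < len(grid[0]):
            return feature(grid, (r2, c2))
        return False  # off-grid cells never satisfy the feature
    return shifted_feature

def dnf_policy(clauses):
    """Policy as a disjunction of conjunctions of feature programs:
    act on the first cell where every feature in some clause holds."""
    def policy(grid):
        for r in range(len(grid)):
            for c in range(len(grid[0])):
                for clause in clauses:
                    if all(f(grid, (r, c)) for f in clause):
                        return (r, c)
        return None  # no cell satisfies any clause
    return policy

# Example rule: "act on an empty cell (0) directly below an obstacle (1)".
policy = dnf_policy([[cell_is(0), shifted(-1, 0, cell_is(1))]])
grid = [
    [0, 1, 0],
    [0, 0, 0],
    [0, 0, 0],
]
print(policy(grid))  # → (1, 1), the empty cell under the obstacle
```

Because each policy is a discrete formula over DSL programs, a probabilistic grammar over the DSL can assign it a prior probability, which is what makes the Bayesian inference over demonstrations described in the abstract possible.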

How to Cite

Silver, T., Allen, K. R., Lew, A. K., Pack Kaelbling, L., & Tenenbaum, J. (2020). Few-Shot Bayesian Imitation Learning with Logical Program Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10251-10258. https://doi.org/10.1609/aaai.v34i06.6587

AAAI Technical Track: Reasoning under Uncertainty