State Abstraction as Compression in Apprenticeship Learning


  • David Abel Brown University
  • Dilip Arumugam Stanford University
  • Kavosh Asadi Brown University
  • Yuu Jinnai Brown University
  • Michael L. Littman Brown University
  • Lawson L.S. Wong Northeastern University



State abstraction can give rise to models of environments that are both compressed and useful, thereby enabling efficient sequential decision making. In this work, we offer the first formalism and analysis of the trade-off between compression and performance in the context of state abstraction for Apprenticeship Learning. We build on Rate-Distortion theory, the classic Blahut-Arimoto algorithm, and the Information Bottleneck method to develop an algorithm for computing state abstractions that approximate the optimal trade-off between compression and performance. We illustrate the power of this algorithmic structure to offer insights into effective abstraction, compression, and reinforcement learning through a mixture of analysis, visuals, and experimentation.
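As background for the abstract above, the classic Blahut-Arimoto iteration it references alternates between updating a stochastic encoder and its induced marginal over code words. The sketch below is the standard rate-distortion form of that iteration, not the paper's apprenticeship-learning variant; the function name, toy source distribution, and distortion matrix are illustrative assumptions.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iters=200, tol=1e-9):
    """Standard Blahut-Arimoto iteration for the rate-distortion trade-off.

    p_x:  (n,) source distribution over states x.
    dist: (n, m) distortion d(x, x_hat) between states and code words.
    beta: Lagrange multiplier trading compression (rate) against distortion.

    Returns the conditional encoder Q(x_hat | x) as an (n, m) array.
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)  # uniform initial marginal over code words
    for _ in range(n_iters):
        # Encoder update: Q(x_hat | x) proportional to q(x_hat) * exp(-beta * d)
        Q = q[None, :] * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        # Marginal update: q(x_hat) = sum_x p(x) * Q(x_hat | x)
        q_new = p_x @ Q
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    return Q

# Toy example: two equiprobable source symbols, two code words,
# unit distortion off the diagonal. At large beta the encoder
# becomes nearly deterministic (little compression, low distortion).
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
Q = blahut_arimoto(p_x, dist, beta=10.0)
```

Sweeping `beta` from small to large traces out the trade-off curve: low `beta` favors aggressive compression (a near-uniform encoder), while high `beta` favors fidelity.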




How to Cite

Abel, D., Arumugam, D., Asadi, K., Jinnai, Y., Littman, M. L., & Wong, L. L. (2019). State Abstraction as Compression in Apprenticeship Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3134-3142.



AAAI Technical Track: Machine Learning