BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems
DOI:
https://doi.org/10.1609/aaai.v32i1.11946
Keywords:
task-oriented dialogue, deep reinforcement learning, exploration, policy learning
Abstract
We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as ε-greedy, Boltzmann exploration, bootstrapping, and intrinsic-reward-based exploration. Additionally, we show that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.
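As a rough illustration of the exploration step described in the abstract (not the authors' implementation), the sketch below draws one Monte Carlo sample of Q-function weights from a Bayes-by-Backprop-style factorized Gaussian posterior and then acts greedily with respect to that sampled Q-function, which is the essence of Thompson sampling. The Bayesian linear Q-function, the dimensions, and the posterior parameters are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's code) of Thompson-sampling
# action selection with a Bayes-by-Backprop-style posterior over Q-weights.
import numpy as np

rng = np.random.default_rng(0)

state_dim, num_actions = 8, 4  # illustrative sizes

# Variational posterior q(w) = N(mu, sigma^2) over a single linear layer
# mapping dialogue-state features to Q-values. In Bayes-by-Backprop these
# parameters would be trained by minimizing the variational free energy;
# here they are simply initialized for illustration.
mu = np.zeros((state_dim, num_actions))
rho = np.full((state_dim, num_actions), -3.0)  # sigma = log(1 + exp(rho))


def sample_q_weights():
    """Draw one Monte Carlo weight sample from the variational posterior."""
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps


def thompson_action(state, w):
    """Act greedily with respect to the sampled Q-function."""
    q_values = state @ w
    return int(np.argmax(q_values))


# One episode of exploration: sample weights once, then act greedily with
# respect to that sample at every turn of the dialogue.
w_sample = sample_q_weights()
for turn in range(5):
    state = rng.standard_normal(state_dim)  # placeholder state features
    action = thompson_action(state, w_sample)
    print(f"turn {turn}: action {action}")
```

The replay-buffer "spiking" mentioned in the abstract amounts to pre-filling the replay buffer with transitions from a few successful dialogues before ordinary Q-learning updates begin; in a sketch like the one above it would simply mean seeding the buffer before the first update.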
Published
2018-04-27
How to Cite
Lipton, Z., Li, X., Gao, J., Li, L., Ahmed, F., & Deng, L. (2018). BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11946
Issue
Vol. 32 No. 1 (2018)
Section
Main Track: NLP and Machine Learning