Reinforced History Backtracking for Conversational Question Answering
DOI:
https://doi.org/10.1609/aaai.v35i15.17617
Keywords:
Applications, Conversational AI/Dialog Systems
Abstract
Modeling the context history of a multi-turn conversation has become a critical step towards better understanding the user query in question answering systems. To utilize the context history, most existing studies take the whole context as input, which inevitably faces two challenges. First, modeling a long history is costly, as it requires more computational resources. Second, a long context history contains much irrelevant information, which makes it difficult to capture the information that is actually relevant to the user query. To alleviate these problems, in this paper we propose a reinforcement learning based method that captures and backtracks the related conversation history to boost model performance. Our method automatically backtracks the history information using implicit feedback from the model performance. We further consider both immediate and delayed rewards to guide the reinforced backtracking policy. Extensive experiments on a large conversational question answering dataset show that the proposed method helps to alleviate the problems arising from longer context histories. The experiments also show that the method yields better performance than other strong baselines, and that the actions it takes are insightful.
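The paper itself does not include code on this page, but the abstract outlines the core idea: a learned policy decides which history turns to backtrack to, trained with both immediate and delayed rewards. The following is a minimal PyTorch sketch of one way such a reinforced keep/drop policy could be instantiated; the class names, network shape, and the reward-mixing parameters (gamma, mix) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BacktrackPolicy(nn.Module):
    """Scores each history turn against the current query and samples
    a keep/drop action per turn (1 = backtrack to / keep this turn)."""

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, history_vecs: torch.Tensor, query_vec: torch.Tensor):
        # history_vecs: (num_turns, hidden); query_vec: (hidden,)
        query = query_vec.unsqueeze(0).expand_as(history_vecs)
        logits = self.scorer(torch.cat([history_vecs, query], dim=-1)).squeeze(-1)
        keep_probs = torch.sigmoid(logits)
        actions = torch.bernoulli(keep_probs)  # stochastic keep/drop per turn
        log_probs = torch.where(actions.bool(), keep_probs, 1 - keep_probs).log()
        return actions, log_probs

def reinforce_loss(log_probs, immediate_rewards, delayed_reward,
                   gamma=0.9, mix=0.5):
    """REINFORCE objective mixing per-step (immediate) rewards with a
    single end-of-episode (delayed) reward via discounted returns."""
    returns, running = [], delayed_reward
    for r in reversed(immediate_rewards):
        running = mix * r + gamma * running
        returns.append(running)
    returns = torch.tensor(list(reversed(returns)), dtype=torch.float32)
    # mean/std normalization as a simple variance-reduction baseline
    returns = (returns - returns.mean()) / (returns.std(unbiased=False) + 1e-8)
    return -(log_probs * returns).sum()

In this sketch, the immediate rewards might come from per-step changes in answer quality and the delayed reward from the final answer score; both are assumptions about how the paper's implicit feedback signal could be realized.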
Published
2021-05-18
How to Cite
Qiu, M., Huang, X., Chen, C., Ji, F., Qu, C., Wei, W., Huang, J., & Zhang, Y. (2021). Reinforced History Backtracking for Conversational Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13718-13726. https://doi.org/10.1609/aaai.v35i15.17617
Issue
Vol. 35 No. 15 (2021)
Section
AAAI Technical Track on Speech and Natural Language Processing II