Search Action Sequence Modeling With Long Short-Term Memory for Search Task Success Evaluation

Authors

  • Alin Fan, Zhejiang University
  • Ling Chen, Zhejiang University
  • Gencai Chen, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v32i1.11844

Abstract

Search task success rate is a crucial metric, grounded in users' search experience, for measuring the performance of search systems. Modeling search action sequences helps capture the latent search patterns of users in successful and unsuccessful search tasks. Existing approaches describe user behavior in search action sequences with aggregated features, which rely on heuristic, hand-crafted feature design and discard much of the information inherent in user behavior. In this paper, we employ Long Short-Term Memory (LSTM) networks, fine-tuned end to end during training, to learn search action sequence representations for search task success evaluation. Concretely, we normalize the search action sequences by introducing a dummy idle action, which guarantees that the time intervals between contiguous actions are fixed. In addition, we propose a novel data augmentation strategy that increases the pattern variation of search action sequence data to improve the generalization ability of the LSTM. We evaluate the proposed approach on open datasets under two different definitions of search task success. The experimental results show that the proposed approach achieves significant performance improvements over several strong search task success evaluation approaches.
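
The normalization step described in the abstract can be pictured as resampling each timestamped action sequence onto a fixed time grid and filling the empty steps with the dummy idle action, so that consecutive steps are separated by a constant interval. The Python sketch below illustrates this idea only; the action labels, the 5-second interval, and the "last action in a slot wins" rule are illustrative assumptions, not the paper's exact procedure.

    # Minimal sketch of idle-action normalization (assumptions: label names,
    # interval length, and slot-collision handling are hypothetical).
    from typing import List, Tuple

    IDLE = "IDLE"  # dummy idle action (hypothetical label)

    def normalize_sequence(actions: List[Tuple[float, str]],
                           interval: float = 5.0) -> List[str]:
        """Map (timestamp_seconds, action_label) pairs onto a fixed time grid."""
        if not actions:
            return []
        actions = sorted(actions, key=lambda a: a[0])
        start, end = actions[0][0], actions[-1][0]
        n_steps = int((end - start) // interval) + 1
        grid = [IDLE] * n_steps              # every slot starts as the idle action
        for t, label in actions:
            slot = int((t - start) // interval)
            grid[slot] = label               # last action in a slot wins (simplification)
        return grid

    # A query at t=0s, a click at t=12s, and a query at t=30s become
    # ['QUERY', 'IDLE', 'CLICK', 'IDLE', 'IDLE', 'IDLE', 'QUERY'].
    print(normalize_sequence([(0, "QUERY"), (12, "CLICK"), (30, "QUERY")]))

The resulting fixed-interval sequences can then be fed to an LSTM as ordinary discrete token sequences, which is what makes the end-to-end representation learning described above straightforward.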

Published

2018-04-26

How to Cite

Fan, A., Chen, L., & Chen, G. (2018). Search Action Sequence Modeling With Long Short-Term Memory for Search Task Success Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11844

Issue

Vol. 32 No. 1 (2018)

Section

Main Track: Machine Learning Applications