Learning to Select from Multiple Options
DOI:
https://doi.org/10.1609/aaai.v37i11.26500
Keywords:
SNLP: Applications, SNLP: Sentence-Level Semantics and Textual Inference, SNLP: Text Classification
Abstract
Many NLP tasks can be regarded as selection problems over a set of options, such as classification tasks, multiple-choice question answering, etc. Textual entailment (TE) has been shown to be the state-of-the-art (SOTA) approach to such selection problems. TE treats the input text as a premise (P) and each option as a hypothesis (H), then handles the selection problem by modeling each (P, H) pair independently. This has two limitations: first, the pairwise modeling is unaware of the other options, which is less intuitive, since humans often determine the best option by comparing the competing candidates; second, the inference process of pairwise TE is time-consuming, especially when the option space is large. To address these two issues, this work first proposes a contextualized TE model (Context-TE) that appends the other k options as context to the current (P, H) pair. Context-TE learns a more reliable decision for H since it considers varied contexts. Second, we speed up Context-TE with Parallel-TE, which learns the decisions for multiple options simultaneously. Parallel-TE significantly improves inference speed while keeping performance comparable to Context-TE. Our methods are evaluated on three tasks (ultra-fine entity typing, intent detection, and multiple-choice QA) that are typical selection problems with option spaces of different sizes. Experiments show that our models set new SOTA performance; in particular, Parallel-TE is k times faster than pairwise TE at inference.
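As a rough illustration of the three input formats the abstract contrasts, the sketch below (our own, not the authors' code) assembles encoder-style inputs for pairwise TE, Context-TE, and Parallel-TE. The premise, the intent-label options, the [SEP] separator, and the exact packing order are all illustrative assumptions, not the paper's actual encoding.

```python
# A minimal sketch (not the authors' implementation) of the three
# input formats. Separator token and option ordering are assumptions.

SEP = " [SEP] "  # assumed separator token of the underlying encoder

premise = "The user asked how to reset a forgotten password."
options = [
    "account recovery",   # hypothetical intent labels
    "billing question",
    "technical support",
]

# Pairwise TE: one (P, H) input per option; k options -> k forward passes,
# and each decision is blind to the competing candidates.
pairwise_inputs = [premise + SEP + h for h in options]

# Context-TE: still one input per option, but the other k-1 options are
# appended as context, so each decision can "see" its competitors.
context_inputs = [
    premise + SEP + h + SEP + SEP.join(o for o in options if o != h)
    for h in options
]

# Parallel-TE: a single input packs all options at once, so decisions
# for every option come out of one forward pass.
parallel_input = premise + SEP + SEP.join(options)

for name, inputs in [("pairwise", pairwise_inputs),
                     ("context", context_inputs),
                     ("parallel", [parallel_input])]:
    print(f"{name}: {len(inputs)} forward pass(es)")
    for s in inputs:
        print(" ", s)
```

In this sketch, Parallel-TE issues a single forward pass regardless of k, which is where the claimed k-fold inference speedup over pairwise TE would come from.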
Published
2023-06-26
How to Cite
Du, J., Yin, W., Xia, C., & Yu, P. S. (2023). Learning to Select from Multiple Options. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12754-12762. https://doi.org/10.1609/aaai.v37i11.26500
Issue
Vol. 37 No. 11
Section
AAAI Technical Track on Speech & Natural Language Processing