Learning to Map Frequent Phrases to Sub-Structures of Meaning Representation for Neural Semantic Parsing

Authors

  • Bo Chen Chinese Academy of Sciences
  • Xianpei Han Chinese Academy of Sciences
  • Ben He University of Chinese Academy of Sciences
  • Le Sun Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v34i05.6253

Abstract

Neural semantic parsers usually generate meaning representation tokens from natural language tokens via an encoder-decoder model. However, there is often a vocabulary-mismatch problem between natural language utterances and logical forms. That is, one word maps to several atomic logical tokens, which need to be handled as a whole rather than as individual logical tokens over multiple decoding steps. In this paper, we propose that the vocabulary-mismatch problem can be effectively resolved by leveraging appropriate logical tokens. Specifically, we exploit macro actions, which are of the same granularity as words/phrases, and allow the model to learn mappings from frequent phrases to the corresponding sub-structures of the meaning representation. Furthermore, macro actions are compact, so utilizing them significantly reduces the search space, which greatly benefits weakly supervised semantic parsing. Experiments show that our method leads to substantial performance improvements on three benchmarks, in both supervised and weakly supervised settings.
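To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' code, and not tied to their datasets or grammar): frequent phrase-to-sub-structure mappings are mined from aligned training pairs and treated as single macro actions, so one decoding step can emit several atomic logical tokens at once. All names and data here are illustrative assumptions.

```python
# Illustrative sketch of macro actions for semantic parsing (hypothetical data).
from collections import Counter

# Assumed aligned pairs mined from training data: a natural-language phrase
# and the atomic logical tokens it maps to.
aligned_pairs = [
    ("largest city", ("largest", "(", "city", "(", "all", ")", ")")),
    ("largest city", ("largest", "(", "city", "(", "all", ")", ")")),
    ("largest city", ("largest", "(", "city", "(", "all", ")", ")")),
    ("border texas", ("next_to", "(", "stateid", "(", "texas", ")", ")")),
]

def mine_macro_actions(pairs, min_count=2):
    """Keep only phrase -> sub-structure mappings that occur frequently."""
    counts = Counter(pairs)
    return {phrase: tokens
            for (phrase, tokens), c in counts.items() if c >= min_count}

def expand_macros(predicted_actions, macros):
    """Expand predicted macro actions back into atomic logical tokens."""
    output = []
    for action in predicted_actions:
        if action in macros:
            output.extend(macros[action])   # one step emits a whole sub-structure
        else:
            output.append(action)           # ordinary atomic logical token
    return output

macros = mine_macro_actions(aligned_pairs)
# A decoder that has learned the macro action needs fewer steps:
print(expand_macros(["answer", "(", "largest city", ")"], macros))
# ['answer', '(', 'largest', '(', 'city', '(', 'all', ')', ')', ')']
```

Because each macro action covers a multi-token sub-structure, the decoder's action sequences are shorter, which is one way to see why the search space shrinks in the weakly supervised setting.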

Published

2020-04-03

How to Cite

Chen, B., Han, X., He, B., & Sun, L. (2020). Learning to Map Frequent Phrases to Sub-Structures of Meaning Representation for Neural Semantic Parsing. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7546-7553. https://doi.org/10.1609/aaai.v34i05.6253

Section

AAAI Technical Track: Natural Language Processing