Merging Statistical Feature via Adaptive Gate for Improved Text Classification

Authors

  • Xianming Li, Ant Group
  • Zongxi Li, Department of Computer Science, City University of Hong Kong
  • Haoran Xie, Lingnan University
  • Qing Li, The Hong Kong Polytechnic University

Keywords

Text Classification & Sentiment Analysis

Abstract

Currently, text classification studies mainly focus on training classifiers using textual input only, or on enhancing semantic features by introducing external knowledge (e.g., hand-crafted lexicons and domain knowledge). In contrast, some intrinsic statistical features of the corpus, such as word frequency and distribution over labels, are not well exploited. Compared with external knowledge, statistical features are deterministic and naturally compatible with the corresponding tasks. In this paper, we propose an Adaptive Gate Network (AGN) to selectively consolidate semantic representations with statistical features. In particular, AGN encodes statistical features through a variational component and merges information via a well-designed valve mechanism. The valve adapts the information flow into the classifier according to the confidence of the semantic features in decision making, which facilitates training a robust classifier and mitigates the overfitting caused by using statistical features. Extensive experiments on datasets of various scales show that, by incorporating statistical information, AGN can effectively improve the classification performance of CNN-, RNN-, Transformer-, and BERT-based models. The experiments also indicate the robustness of AGN against adversarial attacks that manipulate statistical information.
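The valve mechanism described in the abstract can be illustrated with a minimal sketch. The function names, the confidence heuristic, and the blending formula below are illustrative assumptions, not the paper's exact formulation: a sigmoid gate derived from the classifier's confidence in the semantic features alone decides how much of the statistical-feature vector flows into the fused representation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_gate(semantic, statistical, confidence):
    """Blend semantic and statistical feature vectors (toy sketch).

    `confidence` is a scalar logit reflecting how certain the classifier
    is when using semantic features alone. A high confidence closes the
    valve, letting little statistical information through; a low
    confidence opens it. This is illustrative only, not AGN's exact gate.
    """
    g = sigmoid(confidence)  # gate value in (0, 1)
    return [g * s + (1.0 - g) * t for s, t in zip(semantic, statistical)]

semantic = [0.8, -0.2, 0.5]
statistical = [0.1, 0.9, -0.3]
# Confident semantics: fused vector stays close to `semantic`.
print(adaptive_gate(semantic, statistical, 4.0))
# Uncertain semantics: statistical features dominate the mix.
print(adaptive_gate(semantic, statistical, -4.0))
```

A learnable version would compute the gate from the semantic representation itself (e.g., a small feed-forward layer ending in a sigmoid), so the model decides per example how much statistical signal to admit.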

Published

2021-05-18

How to Cite

Li, X., Li, Z., Xie, H., & Li, Q. (2021). Merging Statistical Feature via Adaptive Gate for Improved Text Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13288-13296. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17569

Section

AAAI Technical Track on Speech and Natural Language Processing II