Sparse Structure Learning via Graph Neural Networks for Inductive Document Classification
DOI:
https://doi.org/10.1609/aaai.v36i10.21366
Keywords:
Speech & Natural Language Processing (SNLP)
Abstract
Recently, graph neural networks (GNNs) have been widely used for document classification. However, most existing methods are based on static word co-occurrence graphs without sentence-level information, which poses three challenges: (1) word ambiguity, (2) word synonymity, and (3) dynamic contextual dependency. To address these challenges, we propose a novel GNN-based sparse structure learning model for inductive document classification. Specifically, a document-level graph is initially generated by a disjoint union of sentence-level word co-occurrence graphs. Our model collects a set of trainable edges connecting disjoint words between sentences, and employs structure learning to sparsely select edges with dynamic contextual dependencies. Graphs with sparse structures can jointly exploit local and global contextual information in documents through GNNs. For inductive learning, the refined document graph is further fed into a general readout function for graph-level classification and optimization in an end-to-end manner. Extensive experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art methods and reveal the necessity of learning sparse structures for each document.
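The abstract outlines a pipeline: build a block-diagonal document graph from sentence-level word co-occurrence graphs, sparsely select trainable inter-sentence edges via structure learning, apply a GNN, and classify with a graph-level readout. Below is a minimal sketch of that pipeline in PyTorch; the class and function names (SparseDocGraphModel, build_document_graph) and the sigmoid-gate edge selection are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the paper's code) of the described pipeline:
# disjoint union of sentence graphs, a trainable gate that sparsely keeps
# inter-sentence edges, mean-aggregation message passing, and a mean-pool
# readout for graph-level (document) classification.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_document_graph(sentence_adjs):
    """Disjoint union of per-sentence word co-occurrence adjacency matrices."""
    return torch.block_diag(*sentence_adjs)  # (N_words, N_words), block-diagonal


class SparseDocGraphModel(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes, edge_threshold=0.5):
        super().__init__()
        # Scores a candidate inter-sentence edge from the two endpoint features.
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1)
        )
        self.gnn1 = nn.Linear(in_dim, hid_dim)   # simple message-passing layers
        self.gnn2 = nn.Linear(hid_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, num_classes)
        self.edge_threshold = edge_threshold

    def forward(self, x, intra_adj, candidate_edges):
        # x: (N, in_dim) word-node features
        # intra_adj: block-diagonal sentence-level co-occurrence graph
        # candidate_edges: (E, 2) index pairs of words in different sentences
        src, dst = candidate_edges[:, 0], candidate_edges[:, 1]
        gate = torch.sigmoid(
            self.edge_scorer(torch.cat([x[src], x[dst]], dim=-1))
        ).squeeze(-1)
        keep = gate > self.edge_threshold        # sparse edge selection (assumed)

        adj = intra_adj.clone()
        adj[src[keep], dst[keep]] = gate[keep]   # soft weights keep training end-to-end
        adj[dst[keep], src[keep]] = gate[keep]

        adj = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        adj = adj / adj.sum(dim=-1, keepdim=True)               # row-normalize

        h = F.relu(self.gnn1(adj @ x))           # local + selected global context
        h = F.relu(self.gnn2(adj @ h))
        return self.classifier(h.mean(dim=0))    # mean-pool readout -> document logits
```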
Published
2022-06-28
How to Cite
Piao, Y., Lee, S., Lee, D., & Kim, S. (2022). Sparse Structure Learning via Graph Neural Networks for Inductive Document Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11165-11173. https://doi.org/10.1609/aaai.v36i10.21366
Issue
Section
AAAI Technical Track on Speech and Natural Language Processing