A Simple and Effective Self-Supervised Contrastive Learning Framework for Aspect Detection
Keywords: Information Extraction, Text Classification & Sentiment Analysis, Neural Generative Models & Autoencoders
Abstract
Unsupervised aspect detection (UAD) aims to automatically extract interpretable aspects and identify aspect-specific segments (such as sentences) in online reviews. However, recent deep-learning-based topic models, in particular aspect-based autoencoders, suffer from problems such as extracting noisy aspects and poorly mapping the aspects discovered by the model to the aspects of interest. To tackle these challenges, we first propose a self-supervised contrastive learning framework and an attention-based model equipped with a novel smooth self-attention (SSA) module for the UAD task, in order to learn better representations for aspects and review segments. Second, we introduce a high-resolution selective mapping (HRSMap) method to efficiently assign the aspects discovered by the model to the aspects of interest. We also propose a knowledge distillation technique to further improve aspect detection performance. Our methods outperform several recent unsupervised and weakly supervised approaches on publicly available benchmark user-review datasets. Aspect interpretation results show that the extracted aspects are meaningful, have good coverage, and can be easily mapped to aspects of interest. Ablation studies and attention-weight visualizations also demonstrate the effectiveness of the SSA module and the knowledge distillation method.
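To give a flavor of the self-supervised contrastive objective the abstract refers to, the sketch below implements a generic InfoNCE-style in-batch contrastive loss over segment representations. This is an illustrative assumption, not the paper's exact objective: the function name, temperature value, and the use of same-index rows as positive pairs are all choices made here for the example.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative sketch only,
    not the exact objective from the paper). Row i of `positives` is the
    positive for row i of `anchors`; all other rows serve as in-batch
    negatives."""
    # L2-normalize so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Mean negative log-probability of the true (diagonal) pairs.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)                       # perfect positives
loss_random = info_nce_loss(z, rng.normal(size=(8, 16))) # unrelated pairs
print(loss_aligned, loss_random)
```

Minimizing such a loss pulls each segment's representation toward its positive view while pushing it away from other segments in the batch, which is the general mechanism for learning better aspect and segment representations.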
How to Cite
Shi, T., Li, L., Wang, P., & Reddy, C. K. (2021). A Simple and Effective Self-Supervised Contrastive Learning Framework for Aspect Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13815-13824. https://doi.org/10.1609/aaai.v35i15.17628
AAAI Technical Track on Speech and Natural Language Processing II