De-biased Attention Supervision for Text Classification with Causality
DOI:
https://doi.org/10.1609/aaai.v38i17.29897
Keywords:
NLP: Interpretability, Analysis, and Evaluation of NLP Models; NLP: Text Classification
Abstract
In text classification models, the unsupervised attention mechanism can enhance performance, but it often produces attention distributions that are puzzling to humans, such as assigning high weight to seemingly insignificant conjunctions. Recently, numerous studies have explored Attention Supervision (AS) to guide the model toward more interpretable attention distributions. However, such AS can impair classification performance, especially in specialized domains. In this paper, we address this issue from a causality perspective. First, we leverage the causal graph to reveal two biases in AS: 1) bias caused by the label distribution of the dataset; 2) bias caused by words' differing occurrence ranges, in that some words occur across labels while others occur only under a particular label. We then propose a novel De-biased Attention Supervision (DAS) method to eliminate these biases with causal techniques. Specifically, we apply backdoor adjustment to the label-caused bias and reduce the word-caused bias by subtracting the direct causal effect of the word. Through extensive experiments on two professional text classification datasets (medicine and law), we demonstrate that our method achieves improved classification accuracy along with more coherent attention distributions.
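To make the two de-biasing steps in the abstract concrete, here is a minimal numerical sketch. It is not the paper's implementation: the function names, the use of the label prior as the backdoor-adjustment variable, the scaling factor `alpha`, and the clip-and-renormalize step are all illustrative assumptions.

```python
import numpy as np

def backdoor_adjusted_attention(attn_by_label, label_prior):
    """Backdoor adjustment sketch: marginalize the per-label attention
    supervision signal over the label prior P(c), so that no single
    (possibly over-represented) label dominates the supervision target.

    attn_by_label: (num_labels, seq_len) attention target per label
    label_prior:   (num_labels,) empirical label distribution P(c)
    """
    return label_prior @ attn_by_label  # sum_c P(c) * attn(word | c)

def subtract_direct_word_effect(attn, direct_word_effect, alpha=1.0):
    """Reduce word-caused bias by subtracting an estimate of each word's
    direct (context-free) effect on attention, then renormalizing.
    `alpha` is a hypothetical scaling knob, not from the paper.
    """
    debiased = np.clip(attn - alpha * direct_word_effect, 0.0, None)
    return debiased / debiased.sum()

# Toy example: 2 labels, 3 tokens.
attn_by_label = np.array([[0.5, 0.3, 0.2],
                          [0.2, 0.5, 0.3]])
label_prior = np.array([0.6, 0.4])
adjusted = backdoor_adjusted_attention(attn_by_label, label_prior)
debiased = subtract_direct_word_effect(adjusted, np.array([0.1, 0.0, 0.1]))
```

Here `adjusted` is the label-prior-weighted mixture of the per-label attention targets, and `debiased` down-weights tokens whose attention is explained by the word alone rather than its role under the label.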
Published
2024-03-24
How to Cite
Wu, Y., Liu, Y., Zhao, Z., Lu, W., Zhang, Y., Sun, C., … Kuang, K. (2024). De-biased Attention Supervision for Text Classification with Causality. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19279–19287. https://doi.org/10.1609/aaai.v38i17.29897
Issue
Section
AAAI Technical Track on Natural Language Processing II