Causally Denoise Word Embeddings Using Half-Sibling Regression

Authors

  • Zekun Yang, City University of Hong Kong
  • Tianlin Liu, Friedrich Miescher Institute for Biomedical Research

DOI:

https://doi.org/10.1609/aaai.v34i05.6485

Abstract

Distributional representations of words, also known as word vectors, have become crucial for a wide range of modern natural language processing tasks. Recently, a growing body of word vector postprocessing algorithms has emerged, aiming to render off-the-shelf word vectors even stronger. In line with these investigations, we introduce a novel word vector postprocessing scheme under a causal inference framework. Concretely, the postprocessing pipeline is realized by Half-Sibling Regression (HSR), which allows us to identify and remove confounding noise contained in word vectors. Compared to previous work, our proposed method has the advantages of interpretability and transparency due to its causal inference grounding. Evaluated on a battery of standard lexical-level evaluation tasks and downstream sentiment analysis tasks, our method achieves state-of-the-art performance.
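The HSR denoising idea sketched in the abstract can be illustrated with a short regression routine: each target embedding is regressed on a set of "half-sibling" embeddings, and the fitted component (the estimated shared noise) is subtracted. The use of closed-form ridge regression and the particular choice of sibling vectors below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def hsr_denoise(target_vecs, sibling_vecs, ridge=1.0):
    """Half-Sibling Regression denoising (sketch).

    Each target vector is approximated as a linear combination of the
    sibling vectors via ridge regression; the fitted part is treated as
    confounding noise and subtracted, leaving the residual as the
    denoised embedding. The ridge penalty is an illustrative choice.
    """
    S = sibling_vecs  # shape (n_siblings, dim)
    Y = target_vecs   # shape (n_targets, dim)
    # Closed-form ridge solution: W = (S S^T + ridge * I)^{-1} S Y^T
    G = S @ S.T + ridge * np.eye(S.shape[0])
    W = np.linalg.solve(G, S @ Y.T)   # (n_siblings, n_targets)
    noise_estimate = (S.T @ W).T      # (n_targets, dim)
    return Y - noise_estimate

# Example with random vectors standing in for word embeddings
rng = np.random.default_rng(0)
siblings = rng.normal(size=(5, 8))
targets = rng.normal(size=(3, 8))
cleaned = hsr_denoise(targets, siblings)
```

As a sanity check, a target lying exactly in the span of the siblings is driven to (nearly) zero as the ridge penalty shrinks, since the regression then explains it entirely as "noise."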

Published

2020-04-03

How to Cite

Yang, Z., & Liu, T. (2020). Causally Denoise Word Embeddings Using Half-Sibling Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9426-9433. https://doi.org/10.1609/aaai.v34i05.6485

Section

AAAI Technical Track: Natural Language Processing