Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference

Authors

  • Galen Weld, University of Washington
  • Peter West, University of Washington
  • Maria Glenski, Pacific Northwest National Laboratory
  • David Arbour, Adobe Research
  • Ryan A. Rossi, Adobe Research
  • Tim Althoff, University of Washington

Keywords

Web and Social Media

Abstract

Causal inference studies using textual social media data can provide actionable insights on human behavior. Making accurate causal inferences with text requires controlling for confounding, which could otherwise bias estimates. Recently, many different methods for adjusting for confounders have been proposed, and we show that these existing methods disagree with one another on two datasets inspired by previous social media studies. Evaluating causal methods is challenging, as ground-truth counterfactuals are almost never available. Presently, no empirical evaluation framework for causal methods using text exists, and as such, practitioners must select their methods without guidance. We contribute the first such framework, which consists of five tasks drawn from real-world studies. Our framework enables the evaluation of any causal inference method using text. Across 648 experiments and two datasets, we evaluate every commonly used causal inference method, identifying their strengths and weaknesses to inform social media researchers seeking to use such methods and to guide future improvements. We make all tasks, data, and models public to inform applications and encourage additional research.

Published

2022-05-31

How to Cite

Weld, G., West, P., Glenski, M., Arbour, D., Rossi, R. A., & Althoff, T. (2022). Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1109-1120. Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/19362