Automatically Neutralizing Subjective Bias in Text


  • Reid Pryzant, Stanford University
  • Richard Diehl Martinez, Stanford University
  • Nathan Dass, Stanford University
  • Sadao Kurohashi, Kyoto University
  • Dan Jurafsky, Stanford University
  • Diyi Yang, Georgia Institute of Technology



Texts like news, encyclopedias, and some social media strive for objectivity. Yet bias in the form of inappropriate subjectivity — introducing attitudes via framing, presupposing truth, and casting doubt — remains ubiquitous. This kind of bias erodes our collective trust and fuels social conflict. To address this issue, we introduce a novel testbed for natural language generation: automatically bringing inappropriately subjective text into a neutral point of view (“neutralizing” biased text). We also offer the first parallel corpus of biased language. The corpus contains 180,000 sentence pairs and originates from Wikipedia edits that removed various framings, presuppositions, and attitudes from biased sentences. Last, we propose two strong encoder-decoder baselines for the task. A straightforward yet opaque concurrent system uses a BERT encoder to identify subjective words as part of the generation process. An interpretable and controllable modular algorithm separates these steps, using (1) a BERT-based classifier to identify problematic words and (2) a novel join embedding through which the classifier can edit the hidden states of the encoder. Large-scale human evaluation across four domains (encyclopedias, news headlines, books, and political speeches) suggests that these algorithms are a first step towards the automatic identification and reduction of bias.
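The modular system's "join embedding" mechanism can be illustrated with a toy sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it stands in a random matrix for encoder hidden states, hand-set probabilities for the BERT-based detector's output, and one plausible edit rule (interpolating each token's hidden state toward a shared learned vector `v` in proportion to its bias probability) for how the classifier edits the encoder states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's trained components):
# - hidden: encoder hidden states for a 5-token sentence, dimension 8
# - p: per-token subjectivity probabilities from a detection classifier
# - v: a single learned "join embedding" vector shared across tokens
hidden = rng.normal(size=(5, 8))
p = np.array([0.02, 0.05, 0.91, 0.04, 0.03])  # token 2 flagged as subjective
v = rng.normal(size=(8,))

# One plausible edit rule: interpolate each token's hidden state toward v
# in proportion to its bias probability, leaving neutral tokens untouched.
edited = (1.0 - p)[:, None] * hidden + p[:, None] * v

# Neutral tokens barely move; the flagged token is pulled toward v.
drift = np.linalg.norm(edited - hidden, axis=1)
```

Because detection and editing are separate steps, the detector's probabilities can be inspected or overridden, which is what makes the modular approach interpretable and controllable relative to the end-to-end concurrent system.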




How to Cite

Pryzant, R., Diehl Martinez, R., Dass, N., Kurohashi, S., Jurafsky, D., & Yang, D. (2020). Automatically Neutralizing Subjective Bias in Text. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 480-489.



AAAI Special Technical Track: AI for Social Impact