All Should Be Equal in the Eyes of LMs: Counterfactually Aware Fair Text Generation


  • Pragyan Banerjee Indian Institute of Technology Guwahati
  • Abhinav Java MDSR Labs, Adobe
  • Surgan Jandial MDSR Labs, Adobe
  • Simra Shahid MDSR Labs, Adobe
  • Shaz Furniturewala Birla Institute of Technology and Science, Pilani
  • Balaji Krishnamurthy MDSR Labs, Adobe
  • Sumit Bhatia MDSR Labs, Adobe



NLP: Ethics -- Bias, Fairness, Transparency & Privacy, ML: Ethics, Bias, and Fairness, NLP: (Large) Language Models, NLP: Safety and Robustness


Fairness in Language Models (LMs) remains a long-standing challenge, given the inherent biases in training data that can be perpetuated by models and affect downstream tasks. Recent methods either employ expensive retraining or attempt debiasing during inference by constraining model outputs to contrast with a reference set of biased templates or exemplars. Either way, they do not address the primary goal of fairness: maintaining equitability across different demographic groups. In this work, we posit that generating unbiased output for one demographic under a given context requires the model to be aware of its outputs for other demographics under the same context. To this end, we propose Counterfactually Aware Fair InferencE (CAFIE), a framework that dynamically compares the model’s understanding of diverse demographics to generate more equitable sentences. We conduct an extensive empirical evaluation using base LMs of varying sizes across three diverse datasets and find that CAFIE outperforms strong baselines. CAFIE produces fairer text and strikes the best balance between fairness and language modeling capability.
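To make the counterfactual-awareness idea concrete, here is a minimal toy sketch (not the paper's actual CAFIE scoring function, whose details appear in the full text): given a context mentioning one demographic, we query a stand-in language model for next-token distributions under the original context and under demographic counterfactuals of it, then average the distributions so that no single demographic's biased distribution dominates. The `toy_lm` lookup table and the plain averaging rule are illustrative assumptions, not the authors' method.

```python
def toy_lm(context):
    # Hypothetical biased toy "LM": returns P(next token | context)
    # from a hard-coded table, standing in for a real model's softmax output.
    table = {
        "The man worked as a": {"doctor": 0.6, "nurse": 0.1, "teacher": 0.3},
        "The woman worked as a": {"doctor": 0.2, "nurse": 0.5, "teacher": 0.3},
    }
    return table[context]

def counterfactual_fair_dist(context, counterfactuals):
    """Average next-token distributions over the original context and its
    demographic counterfactuals, yielding a more equitable distribution.
    (Simple averaging is an assumption for illustration.)"""
    contexts = [context] + counterfactuals
    dists = [toy_lm(c) for c in contexts]
    vocab = set().union(*dists)  # union of tokens seen in any distribution
    return {tok: sum(d.get(tok, 0.0) for d in dists) / len(dists)
            for tok in vocab}

fair = counterfactual_fair_dist("The man worked as a",
                                ["The woman worked as a"])
# "doctor" and "nurse" move toward parity (0.4 and 0.3) instead of the
# original skewed 0.6 / 0.1 split for the "man" context.
```

The point of the sketch is only the shape of the computation: decoding consults counterfactual demographic contexts at each step rather than a fixed set of biased exemplars, which is the distinction the abstract draws against prior inference-time debiasing methods.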




How to Cite

Banerjee, P., Java, A., Jandial, S., Shahid, S., Furniturewala, S., Krishnamurthy, B., & Bhatia, S. (2024). All Should Be Equal in the Eyes of LMs: Counterfactually Aware Fair Text Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17673-17681.



AAAI Technical Track on Natural Language Processing I