Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text


  • Nishtha Madaan IBM Research AI
  • Inkit Padhi IBM Research AI
  • Naveen Panwar IBM Research AI
  • Diptikalyan Saha IBM Research AI



Ethics -- Bias, Fairness, Transparency & Privacy; Text Classification & Sentiment Analysis; Interpretability & Analysis of NLP Models


Machine learning has seen tremendous growth recently, which has led to wider adoption of ML systems in educational assessment, credit risk, healthcare, employment, and criminal justice, to name a few. The trustworthiness of ML and NLP systems is a crucial aspect and requires a guarantee that the decisions they make are fair and robust. Aligned with this, we propose a novel framework, GYC, to generate an exhaustive set of counterfactual texts, which are crucial for testing these ML systems. Our main contributions are: a) we introduce GYC, a framework to generate counterfactual samples such that the generation is plausible, diverse, goal-oriented, and effective; b) we generate counterfactual samples that can direct the generation towards a corresponding condition such as a named-entity tag, semantic role label, or sentiment. Our experimental results on various domains show that GYC generates counterfactual text samples exhibiting the above four properties. The generated counterfactuals can act as test cases to evaluate a model and any text debiasing algorithm.




How to Cite

Madaan, N., Padhi, I., Panwar, N., & Saha, D. (2021). Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13516-13524.



AAAI Technical Track on Speech and Natural Language Processing II