Causal Prompting: Debiasing Large Language Model Prompting Based on Front-Door Adjustment

Authors

  • Congzhi Zhang, School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China
  • Linhai Zhang, Department of Informatics, King’s College London, UK
  • Jialong Wu, School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China
  • Yulan He, Department of Informatics, King’s College London, UK; The Alan Turing Institute, UK
  • Deyu Zhou, School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China

DOI:

https://doi.org/10.1609/aaai.v39i24.34777

Abstract

Despite the notable advances of existing prompting methods for Large Language Models (LLMs), such as In-Context Learning and Chain-of-Thought, they still face challenges related to various biases. Traditional debiasing methods primarily focus on the model training stage, including approaches based on data augmentation and reweighting, yet they struggle with the complex biases inherent in LLMs. To address these limitations, the causal relationship behind prompting methods is uncovered using a structural causal model, and a novel causal prompting method based on front-door adjustment is proposed to effectively mitigate LLM biases. Specifically, causal intervention is achieved by designing the prompts without accessing the parameters or logits of LLMs. The chain-of-thought generated by the LLM is employed as the mediator variable, and the causal effect between input prompts and output answers is computed through front-door adjustment to mitigate model biases. Moreover, to accurately represent the chains-of-thought and estimate the causal effects, contrastive learning is used to fine-tune the chain-of-thought encoder by aligning its representation space with that of the LLM. Experimental results show that the proposed causal prompting approach achieves excellent performance across seven natural language processing datasets on both open-source and closed-source LLMs.
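
As a sketch of the underlying idea (the variable mapping below is our reading of the abstract, not the paper's own notation): if the input prompt X, the LLM-generated chain-of-thought M, and the answer Y form a front-door structure X → M → Y with an unobserved confounder acting on X and Y, the standard front-door adjustment (Pearl) expresses the interventional distribution purely in terms of observational quantities:

    % Standard front-door adjustment formula; X = input prompt, M = chain-of-thought,
    % Y = answer is an assumed mapping based on the abstract.
    \[
      P\bigl(Y = y \mid \mathrm{do}(X = x)\bigr)
      = \sum_{m} P(m \mid x) \sum_{x'} P\bigl(y \mid x', m\bigr)\, P(x')
    \]

Every term on the right-hand side is an ordinary conditional probability over prompts, chains-of-thought, and answers, which is consistent with the abstract's claim that the causal effect can be estimated through prompt design alone, without access to the LLM's parameters or logits.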

Published

2025-04-11

How to Cite

Zhang, C., Zhang, L., Wu, J., He, Y., & Zhou, D. (2025). Causal Prompting: Debiasing Large Language Model Prompting Based on Front-Door Adjustment. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25842–25850. https://doi.org/10.1609/aaai.v39i24.34777

Issue

Section

AAAI Technical Track on Natural Language Processing III