Task-Driven Causal Feature Distillation: Towards Trustworthy Risk Prediction

Authors

  • Zhixuan Chu, Ant Group
  • Mengxuan Hu, University of Virginia
  • Qing Cui, Ant Group
  • Longfei Li, Ant Group
  • Sheng Li, University of Virginia

DOI:

https://doi.org/10.1609/aaai.v38i10.29047

Keywords:

ML: Classification and Regression, APP: Other Applications, ML: Ethics, Bias, and Fairness

Abstract

Artificial intelligence's recent successes across many areas have sparked great interest in its potential for trustworthy and interpretable risk prediction. However, most existing models lack causal reasoning and struggle with class imbalance, leading to poor precision and recall. To address this, we propose a Task-Driven Causal Feature Distillation model (TDCFD) that transforms original feature values into causal feature attributions for the specific risk prediction task. A causal feature attribution describes how much the value of a feature contributes to the risk prediction result. After the causal feature distillation, a deep neural network is applied to produce trustworthy prediction results with causal interpretability and high precision/recall. We evaluate TDCFD on several synthetic and real datasets, and the results demonstrate its superiority over state-of-the-art methods in precision, recall, interpretability, and causality.
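The abstract describes a two-stage pipeline: first distill raw feature values into task-specific causal attributions, then feed the distilled features to a neural network for risk prediction. The minimal sketch below illustrates only that structure; the attribution function and the predictor here are hypothetical stand-ins (a correlation-weighted transform and a small feed-forward network), not the paper's actual distillation procedure.

```python
# Minimal sketch of the two-stage pipeline outlined in the abstract.
# causal_feature_attribution is a placeholder; TDCFD's real distillation
# step is task-driven and not specified by the abstract.
import numpy as np
import torch
import torch.nn as nn

def causal_feature_attribution(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: map each raw feature value to a score meant to
    reflect its contribution to the risk outcome. Here each feature is
    centered and scaled by its absolute correlation with the label, purely
    for illustration."""
    Xc = X - X.mean(axis=0)
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return Xc * corr  # "distilled" attribution features

class RiskPredictor(nn.Module):
    """Simple feed-forward network trained on the distilled features."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logit of the predicted risk

# Usage sketch: distill features, then train the predictor on them.
X = np.random.rand(1000, 10).astype(np.float32)
y = (np.random.rand(1000) < 0.1).astype(np.float32)  # imbalanced labels

X_distilled = causal_feature_attribution(X, y).astype(np.float32)
model = RiskPredictor(X_distilled.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# A class-weighted loss is one common way to cope with label imbalance.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor((y == 0).sum() / max(y.sum(), 1.0)))

for _ in range(5):  # a few illustrative epochs
    optimizer.zero_grad()
    loss = loss_fn(model(torch.from_numpy(X_distilled)), torch.from_numpy(y))
    loss.backward()
    optimizer.step()
```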

Published

2024-03-24

How to Cite

Chu, Z., Hu, M., Cui, Q., Li, L., & Li, S. (2024). Task-Driven Causal Feature Distillation: Towards Trustworthy Risk Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11642-11650. https://doi.org/10.1609/aaai.v38i10.29047

Section

AAAI Technical Track on Machine Learning I