Achieving Counterfactual Fairness for Causal Bandit

Authors

  • Wen Huang, University of Arkansas
  • Lu Zhang, University of Arkansas
  • Xintao Wu, University of Arkansas

DOI:

https://doi.org/10.1609/aaai.v36i6.20653

Keywords:

Machine Learning (ML)

Abstract

In online recommendation, customers arrive sequentially and stochastically from an underlying distribution, and the online decision model recommends an item to each arriving individual based on some strategy. We study how to recommend an item at each step so as to maximize the expected reward while achieving user-side fairness, i.e., customers who share similar profiles receive similar rewards regardless of their sensitive attributes and the items recommended. By incorporating causal inference into bandits and adopting soft intervention to model the arm selection strategy, we first propose the d-separation based UCB algorithm (D-UCB), which exploits the d-separation set to reduce the amount of exploration needed to achieve low cumulative regret. Building on D-UCB, we then propose the fair causal bandit (F-UCB) to achieve counterfactual individual fairness. Both theoretical analysis and empirical evaluation demonstrate the effectiveness of our algorithms.
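
The paper's D-UCB and F-UCB algorithms are not reproduced in this abstract, so the sketch below is only a hypothetical illustration of the general idea it describes: a UCB-style bandit whose exploration is restricted, per arriving individual, to arms whose estimated counterfactual reward gap stays within a fairness tolerance. The class name, the tau parameter, and the externally supplied cf_gap estimates are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's D-UCB/F-UCB: a UCB bandit that, at each
# round, maximizes the upper confidence bound over a fairness-feasible arm set.
# cf_gap[a] stands in for an estimate (from some causal model, assumed given) of
# how much the individual's expected reward under arm a would change had their
# sensitive attribute been different.

class FairnessConstrainedUCB:
    def __init__(self, n_arms, tau=0.1, alpha=2.0):
        self.n_arms = n_arms
        self.tau = tau            # fairness tolerance on the counterfactual gap
        self.alpha = alpha        # exploration coefficient
        self.counts = np.zeros(n_arms)
        self.sums = np.zeros(n_arms)
        self.t = 0

    def _ucb(self):
        # Empirical means plus a standard exploration bonus.
        means = self.sums / np.maximum(self.counts, 1)
        bonus = np.sqrt(self.alpha * np.log(self.t + 1) / np.maximum(self.counts, 1))
        return means + bonus

    def select(self, cf_gap):
        self.t += 1
        # Play each arm once first so every estimate is defined.
        untried = np.where(self.counts == 0)[0]
        if len(untried) > 0:
            return int(untried[0])
        ucb = self._ucb()
        # Restrict to arms whose estimated counterfactual gap is within tau;
        # fall back to all arms if the feasible set is empty.
        fair = np.where(np.abs(cf_gap) <= self.tau)[0]
        candidates = fair if len(fair) > 0 else np.arange(self.n_arms)
        return int(candidates[np.argmax(ucb[candidates])])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward


if __name__ == "__main__":
    # Toy run with Bernoulli rewards and random stand-in gap estimates.
    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.7, 0.4])
    bandit = FairnessConstrainedUCB(n_arms=4)
    for _ in range(1000):
        cf_gap = rng.normal(0.0, 0.05, size=4)   # placeholder causal estimates
        a = bandit.select(cf_gap)
        bandit.update(a, rng.binomial(1, true_means[a]))
    print("pulls per arm:", bandit.counts)
```

The fallback to the full arm set when no arm is fairness-feasible is a design choice made only to keep the toy example well defined; how infeasible rounds are handled in the actual F-UCB algorithm is specified in the paper itself.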

Published

2022-06-28

How to Cite

Huang, W., Zhang, L., & Wu, X. (2022). Achieving Counterfactual Fairness for Causal Bandit. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6952-6959. https://doi.org/10.1609/aaai.v36i6.20653

Section

AAAI Technical Track on Machine Learning I