Achieving Fairness in the Stochastic Multi-Armed Bandit Problem


  • Vishakha Patil, Indian Institute of Science
  • Ganesh Ghalme, Indian Institute of Science
  • Vineet Nair, Indian Institute of Science
  • Y. Narahari, Indian Institute of Science



We study an interesting variant of the stochastic multi-armed bandit problem, which we call the Fair-MAB problem, where, in addition to the objective of maximizing the sum of expected rewards, the algorithm also needs to ensure that at any time, each arm is pulled at least a pre-specified fraction of times. We investigate the interplay between learning and fairness in terms of a pre-specified vector denoting the fractions of guaranteed pulls. We define a fairness-aware regret, which we call r-Regret, that takes into account the above fairness constraints and extends the conventional notion of regret in a natural way. Our primary contribution is to obtain a complete characterization of a class of Fair-MAB algorithms via two parameters: the unfairness tolerance and the learning algorithm used as a black-box. For this class of algorithms, we provide a fairness guarantee that holds uniformly over time, irrespective of the choice of the learning algorithm. Further, when the learning algorithm is UCB1, we show that our algorithm achieves constant r-Regret for a large enough time horizon. Finally, we analyze the cost of fairness in terms of the conventional notion of regret. We conclude by experimentally validating our theoretical results.
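The algorithm class described in the abstract combines a fairness check, governed by an unfairness-tolerance parameter, with a black-box learning algorithm such as UCB1. The following is a minimal illustrative sketch of that idea, not the authors' exact pseudocode: the Bernoulli arm means, the variable names, and the simulation loop are all assumptions made for the example. At each round, if some arm has fallen more than `alpha` pulls behind its guaranteed share `r[i] * t`, the most-starved arm is pulled; otherwise the choice is delegated to UCB1.

```python
import math
import random

def fair_mab(means, r, alpha, horizon, seed=0):
    """Illustrative fairness-aware bandit sketch (hypothetical simulator).

    means  : Bernoulli mean reward of each arm (assumed for the demo)
    r      : guaranteed fraction of pulls for each arm, sum(r) < 1
    alpha  : unfairness tolerance (max allowed pull deficit)
    Returns the pull counts of each arm after `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        # Fairness check: deficit of arm i at time t is r[i]*t - counts[i].
        deficits = [r[i] * t - counts[i] for i in range(k)]
        i_star = max(range(k), key=lambda i: deficits[i])
        if deficits[i_star] > alpha:
            arm = i_star                  # honor the fairness guarantee
        elif 0 in counts:
            arm = counts.index(0)         # UCB1 initialization: each arm once
        else:
            # Black-box learner (UCB1): empirical mean + exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = fair_mab(means=[0.9, 0.5, 0.2], r=[0.2, 0.2, 0.2],
                  alpha=1.0, horizon=10000)
```

With these (assumed) parameters, every arm ends up with roughly its guaranteed 20% of the 10,000 pulls, while the remaining pulls concentrate on the best arm, mirroring the interplay between the fairness guarantee and the learning objective that the abstract describes.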




How to Cite

Patil, V., Ghalme, G., Nair, V., & Narahari, Y. (2020). Achieving Fairness in the Stochastic Multi-Armed Bandit Problem. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5379-5386.



AAAI Technical Track: Machine Learning