Fair Influence Maximization: A Welfare Optimization Approach

Authors

  • Aida Rahmattalabi, University of Southern California
  • Shahin Jabbari, Harvard University
  • Himabindu Lakkaraju, Harvard University
  • Phebe Vayanos, University of Southern California
  • Max Izenberg, Pardee RAND Graduate School
  • Ryan Brown, RAND Corporation
  • Eric Rice, University of Southern California
  • Milind Tambe, Harvard University

DOI:

https://doi.org/10.1609/aaai.v35i13.17383

Keywords:

Bias, Fairness & Equity, Social Networks, Societal Impact of AI

Abstract

Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed to aid with the choice of "peer leaders" or "influencers" in such interventions. Yet, traditional algorithms for influence maximization have not been designed with these interventions in mind. As a result, they may disproportionately exclude minority communities from the benefits of the intervention. This has motivated research on fair influence maximization. Existing techniques come with two major drawbacks. First, they require committing to a single fairness measure. Second, these measures are typically imposed as strict constraints, leading to undesirable properties such as wasted resources. To address these shortcomings, we provide a principled characterization of the properties that a fair influence maximization algorithm should satisfy. In particular, we propose a framework based on social welfare theory, wherein the cardinal utilities derived by each community are aggregated using isoelastic social welfare functions. Under this framework, the trade-off between fairness and efficiency can be controlled by a single inequality-aversion design parameter. We then show under what circumstances our proposed principles can be satisfied by a welfare function. The resulting optimization problem is monotone and submodular and can be solved efficiently with optimality guarantees. Our framework encompasses leximin and proportional fairness as special cases. Extensive experiments on synthetic and real-world datasets, including a case study on landslide risk management, demonstrate the efficacy of the proposed framework.
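As background on the welfare objective referenced above: an isoelastic social welfare function aggregates the per-community utilities u_1, ..., u_C (e.g., the expected fraction of each community reached) using a single inequality-aversion parameter α ≥ 0. A standard statement of the family, shown here for illustration only (the paper's exact weighting, e.g. by community sizes n_c, may differ in detail), is

    W_\alpha(u_1,\dots,u_C) \;=\; \sum_{c=1}^{C} n_c\,\frac{u_c^{\,1-\alpha}}{1-\alpha} \quad (\alpha \ge 0,\ \alpha \neq 1),
    \qquad
    W_1(u_1,\dots,u_C) \;=\; \sum_{c=1}^{C} n_c \log u_c .

Setting α = 0 recovers the utilitarian objective of standard influence maximization, α = 1 corresponds to proportional (Nash) fairness, and α → ∞ approaches leximin, matching the special cases mentioned in the abstract.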
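The abstract also notes that the resulting objective is monotone and submodular, so greedy seed selection enjoys the usual (1 − 1/e)-style approximation guarantees for monotone submodular maximization. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: it estimates spread with a Monte Carlo independent cascade and greedily adds the node that yields the highest welfare value. All names (simulate_spread, greedy_fair_im), the edge probability p, and the trial count are assumptions made for this example.

    import math
    import random

    def simulate_spread(graph, seeds, p=0.1, trials=200):
        # Monte Carlo estimate of an independent cascade: for each node,
        # the fraction of trials in which it is activated from `seeds`.
        counts = {v: 0.0 for v in graph}
        for _ in range(trials):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                new = []
                for u in frontier:
                    for v in graph[u]:
                        if v not in active and random.random() < p:
                            active.add(v)
                            new.append(v)
                frontier = new
            for v in active:
                counts[v] += 1.0 / trials
        return counts

    def community_utilities(graph, seeds, communities):
        # Expected fraction of each community reached by `seeds`.
        reach = simulate_spread(graph, seeds)
        return {c: sum(reach[v] for v in members) / len(members)
                for c, members in communities.items()}

    def welfare(utils, sizes, alpha):
        # Isoelastic aggregation of per-community utilities; utilities are
        # clamped away from zero to keep the expression finite.
        if abs(alpha - 1.0) < 1e-9:
            return sum(sizes[c] * math.log(max(u, 1e-9)) for c, u in utils.items())
        return sum(sizes[c] * max(u, 1e-9) ** (1.0 - alpha) / (1.0 - alpha)
                   for c, u in utils.items())

    def greedy_fair_im(graph, communities, budget, alpha=1.0):
        # Greedy seed selection: repeatedly add the node that yields the
        # highest welfare of the resulting seed set.
        sizes = {c: len(m) for c, m in communities.items()}
        seeds = set()
        for _ in range(budget):
            best, best_val = None, float("-inf")
            for v in graph:
                if v in seeds:
                    continue
                utils = community_utilities(graph, seeds | {v}, communities)
                val = welfare(utils, sizes, alpha)
                if val > best_val:
                    best, best_val = v, val
            seeds.add(best)
        return seeds

    # Toy usage: a 5-node graph split into two communities.
    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
    communities = {"A": [0, 1, 2], "B": [3, 4]}
    print(greedy_fair_im(graph, communities, budget=2, alpha=1.0))

Larger values of alpha in this sketch shift the selected seeds toward the worse-off community, at some cost in total spread, which is the fairness-efficiency trade-off the abstract describes.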

Published

2021-05-18

How to Cite

Rahmattalabi, A., Jabbari, S., Lakkaraju, H., Vayanos, P., Izenberg, M., Brown, R., Rice, E., & Tambe, M. (2021). Fair Influence Maximization: A Welfare Optimization Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11630-11638. https://doi.org/10.1609/aaai.v35i13.17383

Issue

Vol. 35 No. 13 (2021)

Section

AAAI Technical Track on Philosophy and Ethics of AI