Online Platforms and the Fair Exposure Problem under Homophily

Authors

  • Jakob Schoeffer, Karlsruhe Institute of Technology (KIT)
  • Alexander Ritchie, University of Michigan
  • Keziah Naggita, Toyota Technological Institute at Chicago
  • Faidra Monachou, Harvard University
  • Jessica Finocchiaro, Harvard University, Center for Research on Computation and Society (CRCS)
  • Marc Juarez, University of Edinburgh

DOI:

https://doi.org/10.1609/aaai.v37i10.26404

Keywords:

PEAI: Societal Impact of AI, PEAI: Bias, Fairness & Equity, GTEP: Applications, GTEP: Other Foundations of Game Theory & Economic Paradigms, MAS: Other Foundations of Multiagent Systems

Abstract

In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) between two groups of users through constraints similar to those once imposed by the Fairness Doctrine in the United States. Groups are characterized by different affiliations (e.g., political views) and have different preferences for content. We develop a stylized framework that models intra- and inter-group content propagation under homophily, and we formulate the platform's decision as an optimization problem that aims to maximize user engagement, potentially under fairness constraints. Our main notion of fairness requires that each group see a mixture of their preferred and non-preferred content, encouraging information diversity. Promoting such information diversity is often viewed as desirable and as a potential means of breaking out of harmful echo chambers. We study the solutions to both the fairness-agnostic and fairness-aware problems. We prove that a fairness-agnostic approach inevitably leads to group-homogeneous targeting by the platform. This is only partially mitigated by imposing fairness constraints: we show that there exist optimal fairness-aware solutions that target one group with different types of content and the other group with only one type, which is not necessarily that group's most preferred. Finally, using simulations with real-world data, we study the system dynamics and quantify the price of fairness.
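The abstract's setup, with two groups, homophilous content propagation, and a platform choosing initial targeting to maximize engagement, can be illustrated with a toy discrete-time sketch. This is a hypothetical simplification for intuition only, not the paper's actual model: all dynamics, parameter names (`p_in`, `p_out`, `pref`), and the engagement measure below are assumptions.

```python
# Toy sketch (hypothetical, not the paper's model): two groups (0 and 1),
# two content types (0 and 1); group g prefers content of type g.
# x[g][c] is the fraction of group g exposed to content type c.

def simulate(seed, steps=30, p_in=0.3, p_out=0.05, pref=0.8):
    """Propagate exposure: within-group sharing (p_in) dominates
    cross-group sharing (p_out), capturing homophily; sharing
    intensity scales with how much the sharer likes the content."""
    x = [list(row) for row in seed]
    for _ in range(steps):
        new = [[0.0, 0.0], [0.0, 0.0]]
        for g in (0, 1):
            for c in (0, 1):
                other = 1 - g
                s_own = pref if c == g else 1 - pref       # own group's liking of c
                s_oth = pref if c == other else 1 - pref   # other group's liking of c
                inflow = p_in * x[g][c] * s_own + p_out * x[other][c] * s_oth
                new[g][c] = min(1.0, x[g][c] + (1 - x[g][c]) * inflow)
        x = new
    return x

def engagement(x, pref=0.8):
    """Total engagement: exposure weighted by how much each group
    likes the content type it is exposed to."""
    return sum(x[g][c] * (pref if c == g else 1 - pref)
               for g in (0, 1) for c in (0, 1))
```

Under these toy dynamics, seeding each group with its own preferred content (`[[0.1, 0.0], [0.0, 0.1]]`) yields higher engagement than the reverse targeting, echoing the paper's finding that a fairness-agnostic platform gravitates toward group-homogeneous targeting; a fairness constraint would instead bound each group's exposure mixture.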

Published

2023-06-26

How to Cite

Schoeffer, J., Ritchie, A., Naggita, K., Monachou, F., Finocchiaro, J., & Juarez, M. (2023). Online Platforms and the Fair Exposure Problem under Homophily. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11899-11908. https://doi.org/10.1609/aaai.v37i10.26404

Section

AAAI Technical Track on Philosophy and Ethics of AI