Non-linear Welfare-Aware Strategic Learning

Authors

  • Tian Xie, The Ohio State University
  • Xueru Zhang, The Ohio State University

DOI:

https://doi.org/10.1609/aies.v7i1.31755

Abstract

This paper studies algorithmic decision-making in the presence of strategic individual behaviors, where an ML model is used to make decisions about human agents and the latter can adapt their behavior strategically to improve their future data. Existing results on strategic learning have largely focused on the linear setting, where agents with linear labeling functions best respond to a (noisy) linear decision policy. Instead, this work focuses on general non-linear settings where agents respond to the decision policy with only "local information" about the policy. Moreover, we simultaneously consider the objectives of maximizing decision-maker welfare (model prediction accuracy), social welfare (agent improvement caused by strategic behaviors), and agent welfare (the extent to which the ML model underestimates the agents). We first generalize the agent best response model of previous works to the non-linear setting and then investigate the compatibility of the welfare objectives. We show that the three welfare objectives can attain their optima simultaneously only under restrictive conditions, which are challenging to satisfy in non-linear settings. These theoretical results imply that existing works maximizing the welfare of only a subset of parties typically diminish the welfare of the others. We therefore argue for the necessity of balancing the welfare of each party in non-linear settings and propose an irreducible optimization algorithm suitable for general strategic learning. Experiments on synthetic and real data validate the proposed algorithm.
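To make the abstract's best-response model concrete, below is a minimal Python sketch of an agent responding to a non-linear policy using only local information, here taken to be a finite-difference gradient of the policy at the agent's current features, under an assumed quadratic manipulation cost. The policy form, the cost form, and all function names (`policy`, `best_response`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def policy(x):
    # Example non-linear decision policy (a logistic score); illustrative only.
    return 1.0 / (1.0 + np.exp(-(x[0] ** 2 + 0.5 * x[1])))

def grad_policy(x, eps=1e-5):
    # Finite-difference gradient of the policy at x: the "local information"
    # the agent is assumed to observe, rather than the full policy.
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (policy(x + d) - policy(x - d)) / (2 * eps)
    return g

def best_response(x, cost=1.0, lr=0.1, steps=50):
    # Gradient-ascent best response: the agent trades a higher policy score
    # against an assumed quadratic cost ||x' - x||^2 of changing its features.
    x0, x_new = x.copy(), x.copy()
    for _ in range(steps):
        utility_grad = grad_policy(x_new) - 2 * cost * (x_new - x0)
        x_new = x_new + lr * utility_grad
    return x_new

x = np.array([0.2, -0.5])
x_br = best_response(x)
print("score before:", policy(x), "after:", policy(x_br))
```

In the linear setting studied by prior work, this local gradient is constant and coincides with global knowledge of the policy; the non-linear setting breaks that equivalence, which is what motivates the local-information response model.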
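The abstract gives only informal definitions of the three welfare objectives; one plausible formalization consistent with its parentheticals (our notation, not necessarily the paper's) is the following, where $x$ is an agent's original feature vector, $x^{*}$ its best response, $y(\cdot)$ the true labeling function, $f$ the deployed policy, and $\ell$ a loss:

$$
W_{\mathrm{DM}} = -\,\mathbb{E}\big[\ell\big(f(x^{*}),\, y(x^{*})\big)\big], \qquad
W_{\mathrm{soc}} = \mathbb{E}\big[y(x^{*}) - y(x)\big], \qquad
W_{\mathrm{agent}} = -\,\mathbb{E}\big[\max\{0,\; y(x^{*}) - f(x^{*})\}\big]
$$

Here $W_{\mathrm{DM}}$ rewards prediction accuracy on post-response data, $W_{\mathrm{soc}}$ measures the true improvement induced by strategic behavior, and $W_{\mathrm{agent}}$ penalizes underestimation, i.e., the policy scoring an agent below its true label. The compatibility result says these three objectives are jointly maximized only under restrictive conditions.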

Published

2024-10-16

How to Cite

Xie, T., & Zhang, X. (2024). Non-linear Welfare-Aware Strategic Learning. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1660-1671. https://doi.org/10.1609/aies.v7i1.31755