Algorithmic Decision-Making under Agents with Persistent Improvement
DOI: https://doi.org/10.1609/aies.v7i1.31756

Abstract
This paper studies algorithmic decision-making under human strategic behavior, where a decision-maker uses an algorithm to make decisions about human agents, who, with information about the algorithm, may strategically exert effort to improve and receive favorable decisions. Unlike prior works that assume agents benefit from their efforts immediately, we consider realistic scenarios where the impacts of these efforts are persistent and agents benefit by making improvements gradually. We first develop a dynamic model to characterize persistent improvements and, based on this, construct a Stackelberg game to model the interplay between agents and the decision-maker. We analytically characterize the equilibrium strategies and identify conditions under which agents have incentives to invest effort to improve their qualifications. With these dynamics, we then study how the decision-maker can design an optimal policy to incentivize the largest improvement within the agent population. We also extend the model to settings where 1) agents may be dishonest and game the algorithm into making favorable but erroneous decisions; and 2) honest efforts are forgettable and not sufficient to guarantee persistent improvements. With the extended models, we further examine conditions under which agents prefer honest effort over dishonest behavior, as well as the impacts of forgettable efforts.
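The notion of persistent improvement described above can be illustrated with a minimal sketch. All names and parameters below (`rate`, `theta`, the linear update rule) are illustrative assumptions, not the paper's actual model: an agent's qualification accumulates a fraction of invested effort each round, and a threshold-based decision-maker accepts the agent once the qualification is high enough.

```python
# Hypothetical sketch, NOT the paper's model: persistent improvement
# means effort raises qualification cumulatively across rounds, rather
# than producing an immediate one-shot benefit.

def simulate(q0, effort, rate, theta, horizon):
    """Simulate an agent whose qualification q improves persistently.

    Each round, a fraction `rate` of the invested `effort` converts
    into lasting qualification; the decision-maker accepts once
    q >= theta. Returns the final qualification and the first round
    of acceptance (or None if never accepted).
    """
    q = q0
    accepted_at = None
    for t in range(horizon):
        q += rate * effort            # improvement persists across rounds
        if accepted_at is None and q >= theta:
            accepted_at = t           # first round the agent is accepted
    return q, accepted_at

final_q, t_accept = simulate(q0=0.2, effort=1.0, rate=0.1, theta=0.6, horizon=10)
```

Under this toy dynamic, acceptance arrives only after several rounds of sustained effort, which is the qualitative contrast with the immediate-benefit assumption in prior work.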
Published
2024-10-16
How to Cite
Xie, T., Tan, X., & Zhang, X. (2024). Algorithmic Decision-Making under Agents with Persistent Improvement. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1672-1683. https://doi.org/10.1609/aies.v7i1.31756
Section
Full Archival Papers