Incentive-Aware PAC Learning

Authors

  • Hanrui Zhang, Duke University
  • Vincent Conitzer, Duke University

DOI:

https://doi.org/10.1609/aaai.v35i6.16726

Keywords:

Mechanism Design, Learning Theory

Abstract

We study PAC learning in the presence of strategic manipulation, where data points may modify their features in certain predefined ways in order to receive a better outcome. We show that the vanilla ERM principle fails to achieve any nontrivial guarantee in this context. Instead, we propose an incentive-aware version of the ERM principle which has asymptotically optimal sample complexity. We then focus our attention on incentive-compatible classifiers, which provably prevent any kind of strategic manipulation. We give a sample complexity bound that is, curiously, independent of the hypothesis class, for the ERM principle restricted to incentive-compatible classifiers. This suggests that incentive compatibility alone can act as an effective means of regularization. We further show that it is without loss of generality to consider only incentive-compatible classifiers when opportunities for strategic manipulation satisfy a transitivity condition. As a consequence, in such cases, our hypothesis-class-independent sample complexity bound applies even without incentive compatibility. Our results set the foundations of incentive-aware PAC learning.
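To make the setting concrete, here is a minimal toy sketch (not the paper's formal construction) of the incentive-aware ERM idea with one-dimensional threshold classifiers. The assumptions are hypothetical: each point can raise its feature by at most a budget `DELTA` (its manipulation set), agents prefer the positive label, and a point manipulates exactly when doing so flips its label to 1. Incentive-aware ERM then evaluates each hypothesis on the post-manipulation responses rather than the reported features.

```python
DELTA = 1.0  # hypothetical manipulation budget (not from the paper)

def responds_positive(theta, x):
    """Label a point with true feature x receives after best-responding
    to threshold classifier theta: it can report any x' in [x, x + DELTA],
    so it reaches label 1 iff some reachable report crosses the threshold."""
    return 1 if x + DELTA >= theta else 0

def incentive_aware_error(theta, sample):
    """Empirical error of threshold theta under strategic best responses."""
    return sum(responds_positive(theta, x) != y for x, y in sample) / len(sample)

def incentive_aware_erm(sample, thresholds):
    """Pick the threshold minimizing post-manipulation empirical error."""
    return min(thresholds, key=lambda t: incentive_aware_error(t, sample))

# Toy data: negatives near 0, positives above 1.
sample = [(0.0, 0), (0.4, 0), (1.2, 1), (2.0, 1)]
thresholds = [0.5, 1.0, 1.5, 2.0, 2.5]
best = incentive_aware_erm(sample, thresholds)
```

On this toy sample, a vanilla ERM threshold placed just above the negatives (say 0.5) separates the reported training data perfectly, yet under manipulation both negatives cross it, giving 50% error; the incentive-aware choice (1.5) anticipates the best responses and achieves zero post-manipulation error, illustrating the abstract's point that vanilla ERM fails while the incentive-aware version does not.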

Published

2021-05-18

How to Cite

Zhang, H., & Conitzer, V. (2021). Incentive-Aware PAC Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6), 5797-5804. https://doi.org/10.1609/aaai.v35i6.16726

Section

AAAI Technical Track on Game Theory and Economic Paradigms