Optimally Auditing Adversarial Agents

Authors

  • Sanmay Das, Virginia Polytechnic Institute and State University
  • Fang-Yi Yu, George Mason University
  • Yuang Zhang, George Mason University

DOI:

https://doi.org/10.1609/aaai.v40i20.38722

Abstract

Fraud can pose a challenge in many resource allocation domains, including social service delivery and credit provision. For example, agents may misreport private information in order to gain benefits or access to credit. To mitigate this, a principal can design strategic audits to verify claims and penalize misreporting. In this paper, we introduce a general model of audit policy design as a principal-agent game with multiple agents, where the principal commits to an audit policy, and agents collectively choose an equilibrium that minimizes the principal’s utility. We examine both adaptive and non-adaptive settings, depending on whether the principal's policy can be responsive to the distribution of agent reports. Our work provides efficient algorithms for computing optimal audit policies in both settings and extends these results to a setting with limited audit budgets.
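To make the audit-design idea concrete, here is a minimal toy sketch, not the paper's model: a principal deters misreporting by auditing each possible claim just often enough that the expected penalty outweighs the gain. The function name, gain values, and penalty are all illustrative assumptions.

```python
# Toy illustration (hypothetical numbers, not from the paper): the principal
# picks a per-claim audit probability. A misreport yielding gain g is deterred
# when audit_prob * penalty >= g, so the cheapest deterring policy audits each
# claim with probability g / penalty, capped at 1.

def deterring_audit_probs(gains, penalty):
    """Return the smallest audit probability that deters each misreport."""
    return [min(1.0, g / penalty) for g in gains]

gains = [2.0, 5.0, 8.0]   # hypothetical benefit from each possible misreport
penalty = 10.0            # fine imposed if an audit catches a misreport
print(deterring_audit_probs(gains, penalty))  # → [0.2, 0.5, 0.8]
```

The paper's setting is richer (multiple agents choosing an adversarial equilibrium, adaptive policies, and budget constraints), but the same deterrence trade-off between audit cost and misreporting gain is the underlying lever.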

Published

2026-03-14

How to Cite

Das, S., Yu, F.-Y., & Zhang, Y. (2026). Optimally Auditing Adversarial Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 40(20), 16787-16794. https://doi.org/10.1609/aaai.v40i20.38722

Section

AAAI Technical Track on Game Theory and Economic Paradigms