Improved Bandits in Many-to-One Matching Markets with Incentive Compatibility
DOI:
https://doi.org/10.1609/aaai.v38i12.29226
Keywords:
ML: Online Learning & Bandits
Abstract
Two-sided matching markets have been widely studied in the literature due to their rich applications. Since participants are usually uncertain about their preferences, online algorithms have recently been adopted to learn them through iterative interactions. An existing work initiates the study of this problem in a many-to-one setting with responsiveness. However, its results are far from optimal and lack guarantees of incentive compatibility. We first extend an existing algorithm for the one-to-one setting to this more general setting and show that it achieves a near-optimal bound for player-optimal regret. Nevertheless, because it requires substantial collaboration, a single player's deviation can greatly increase its own cumulative reward while inflicting linear regret on the others. In this paper, we aim to improve the regret bound in many-to-one markets while ensuring incentive compatibility. We first propose the adaptively explore-then-deferred-acceptance (AETDA) algorithm for the responsiveness setting, derive an upper bound on its player-optimal stable regret, and demonstrate its guarantee of incentive compatibility. This result is a significant improvement over existing works and, to the best of our knowledge, constitutes the first player-optimal guarantee in matching markets that offers such robust assurances. We also consider broader substitutable preferences, one of the most general conditions that ensures the existence of a stable matching and covers responsiveness. For this setting, we devise an online DA (ODA) algorithm and establish an upper bound on its player-pessimal stable regret.
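The deferred-acceptance (DA) subroutine underlying both algorithms can be illustrated with a minimal sketch. This is not the paper's AETDA or ODA procedure (which additionally learn unknown preferences via bandit feedback); it is a hedged example of the classical player-proposing DA in a many-to-one market with known responsive preferences and arm capacities, with all names (`deferred_acceptance`, the toy players `p1`–`p3` and arms `a1`, `a2`) chosen for illustration only.

```python
# Illustrative sketch (not the paper's algorithm): player-proposing
# deferred acceptance in a many-to-one market. Each arm has a quota
# (capacity) and a responsive preference, i.e., a rank list over players.

def deferred_acceptance(player_prefs, arm_prefs, capacities):
    """player_prefs[p]: arms in decreasing preference order.
    arm_prefs[a]: players in decreasing preference order.
    capacities[a]: quota of arm a.  Returns {player: matched arm}."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in arm_prefs.items()}
    next_choice = {p: 0 for p in player_prefs}   # next arm index to propose to
    held = {a: [] for a in arm_prefs}            # tentatively accepted players
    free = list(player_prefs)                    # currently unmatched players
    while free:
        p = free.pop()
        if next_choice[p] >= len(player_prefs[p]):
            continue                             # p exhausted its list: stays unmatched
        a = player_prefs[p][next_choice[p]]
        next_choice[p] += 1
        held[a].append(p)
        held[a].sort(key=lambda q: rank[a][q])   # arm keeps its best proposers
        if len(held[a]) > capacities[a]:
            free.append(held[a].pop())           # reject the worst over quota
    return {p: a for a, ps in held.items() for p in ps}

# Toy market: arm a1 holds up to 2 players, arm a2 holds 1.
matching = deferred_acceptance(
    {'p1': ['a1', 'a2'], 'p2': ['a1', 'a2'], 'p3': ['a1', 'a2']},
    {'a1': ['p1', 'p2', 'p3'], 'a2': ['p2', 'p3', 'p1']},
    {'a1': 2, 'a2': 1})
# matching == {'p1': 'a1', 'p2': 'a1', 'p3': 'a2'}
```

With responsive preferences, an arm's choice over sets reduces to this rank-list comparison, which is why the per-proposal accept/reject rule above suffices; the substitutable-preference setting treated by ODA would replace it with a general choice function.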
Published
2024-03-24
How to Cite
Kong, F., & Li, S. (2024). Improved Bandits in Many-to-One Matching Markets with Incentive Compatibility. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13256-13264. https://doi.org/10.1609/aaai.v38i12.29226
Section
AAAI Technical Track on Machine Learning III