Maximizing the Success Probability of Policy Allocations in Online Systems
DOI:
https://doi.org/10.1609/aaai.v38i10.28982
Keywords:
ML: Optimization, APP: Web, CSO: Applications, CSO: Constraint Optimization, ML: Applications, ML: Calibration & Uncertainty Quantification, ML: Causal Learning, RU: Applications, RU: Causality, RU: Stochastic Optimization, SO: Metareasoning and Metaheuristics, SO: Non-convex Optimization
Abstract
The effectiveness of advertising in e-commerce largely depends on merchants' ability to bid on and win impressions for their targeted users. The bidding procedure is highly complex due to factors such as market competition, user behavior, and the diverse objectives of advertisers. In this paper, we consider the problem at the level of user timelines instead of individual bid requests, manipulating full policies (i.e., pre-defined bidding strategies) rather than individual bid values. To allocate policies to users optimally, typical multiple-treatment allocation methods solve knapsack-like problems that maximize an expected value under constraints. In the specific context of online advertising, we argue that optimizing for the probability of success is a better-suited objective than expected-value maximization, and we introduce the SuccessProbaMax algorithm, which seeks the policy allocation most likely to outperform a fixed reference policy. Finally, we conduct comprehensive experiments on both synthetic and real-world data to evaluate its performance. The results demonstrate that our proposed algorithm outperforms conventional expected-value maximization algorithms in terms of success rate.
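The abstract's central distinction — maximizing the probability of beating a reference policy versus maximizing expected value — can be illustrated with a small Monte Carlo sketch. This is not the paper's SuccessProbaMax algorithm; the allocation names and reward distributions below are hypothetical, chosen only to show that the two objectives can rank allocations differently:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated total-reward samples for a fixed reference policy.
reference = rng.normal(loc=100.0, scale=5.0, size=100_000)

# Hypothetical allocation A: higher mean reward, but high variance.
alloc_a = rng.normal(loc=105.0, scale=30.0, size=100_000)
# Hypothetical allocation B: slightly lower mean, but low variance.
alloc_b = rng.normal(loc=103.0, scale=2.0, size=100_000)

for name, samples in [("A", alloc_a), ("B", alloc_b)]:
    expected_value = samples.mean()                     # expected-value objective
    success_prob = (samples > reference).mean()         # P(outperform reference)
    print(f"Allocation {name}: E[reward] = {expected_value:.1f}, "
          f"P(beat reference) = {success_prob:.3f}")
```

Under these assumed distributions, allocation A wins on expected value while allocation B wins on success probability, which is the kind of disagreement that motivates optimizing the latter directly.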
Published
2024-03-24
How to Cite
Betlei, A., Vladimirova, M., Sebbar, M., Urien, N., Rahier, T., & Heymann, B. (2024). Maximizing the Success Probability of Policy Allocations in Online Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11061-11068. https://doi.org/10.1609/aaai.v38i10.28982
Section
AAAI Technical Track on Machine Learning I