Comparison Lift: Bandit-based Experimentation System for Online Advertising


  • Tong Geng
  • Xiliang Lin
  • Harikesh S. Nair, Stanford University
  • Jun Hao
  • Bin Xiang
  • Shurui Fan



Keywords: Computational Advertising, Digital Marketing, Contextual Bandits, Thompson Sampling


Comparison Lift is an experimentation-as-a-service (EaaS) application for testing online advertising audiences and creatives. Unlike many other EaaS tools, which focus primarily on fixed-sample A/B testing, Comparison Lift deploys a custom bandit-based experimentation algorithm. The advantages of the bandit-based approach are two-fold. First, it aligns the randomization induced in the test with the advertiser’s goals for testing. Second, by adapting the experimental design to information acquired during the test, it substantially reduces the cost of experimentation to the advertiser. Since its launch in May 2019, Comparison Lift has been utilized in over 1,500 experiments. We estimate that utilization of the product has helped increase click-through rates of participating advertising campaigns by 46% on average, and that the adaptive design has generated 27% more clicks on average during testing than a fixed-sample A/B design. Both suggest significant value generation and cost savings to advertisers from the product.
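To make the bandit-based idea concrete, the sketch below simulates a standard Beta-Bernoulli Thompson Sampling loop over a few ad variants with hypothetical click-through rates. This is an illustration of the general technique named in the keywords, not the authors' production algorithm; the variant CTRs, impression count, and function names are invented for the example.

```python
import random

def thompson_sampling(true_ctrs, n_impressions, seed=0):
    """Simulate Beta-Bernoulli Thompson Sampling over ad variants.

    true_ctrs: hypothetical click-through rates, one per variant
               (assumed values for illustration only).
    Returns (clicks, misses) counts per variant.
    """
    rng = random.Random(seed)
    k = len(true_ctrs)
    clicks = [0] * k  # posterior successes per variant
    misses = [0] * k  # posterior failures per variant
    for _ in range(n_impressions):
        # Sample one CTR estimate per variant from its Beta(1+clicks,
        # 1+misses) posterior and serve the variant with the highest draw.
        draws = [rng.betavariate(1 + clicks[i], 1 + misses[i])
                 for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        # Simulate whether the served impression is clicked.
        if rng.random() < true_ctrs[arm]:
            clicks[arm] += 1
        else:
            misses[arm] += 1
    return clicks, misses

clicks, misses = thompson_sampling([0.02, 0.05, 0.08], 10_000)
impressions = [c + m for c, m in zip(clicks, misses)]
```

Because the posterior concentrates as evidence accumulates, traffic shifts toward the better-performing variants during the test itself, which is the mechanism behind the reported gain in clicks relative to a fixed-sample A/B design that splits traffic evenly throughout.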




How to Cite

Geng, T., Lin, X., Nair, H. S., Hao, J., Xiang, B., & Fan, S. (2021). Comparison Lift: Bandit-based Experimentation System for Online Advertising. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15117-15126.



IAAI Technical Track on Highly Innovative Applications of AI