Comparison Lift: Bandit-based Experimentation System for Online Advertising

Authors

  • Tong Geng, JD.com
  • Xiliang Lin, JD.com
  • Harikesh S. Nair, Stanford University and JD.com
  • Jun Hao, JD.com
  • Bin Xiang, JD.com
  • Shurui Fan, JD.com

DOI:

https://doi.org/10.1609/aaai.v35i17.17775

Keywords:

Computational Advertising, Digital Marketing, Contextual Bandits, Thompson Sampling

Abstract

Comparison Lift is an experimentation-as-a-service (EaaS) application for testing online advertising audiences and creatives at JD.com. Unlike many other EaaS tools that focus primarily on fixed-sample A/B testing, Comparison Lift deploys a custom bandit-based experimentation algorithm. The advantages of the bandit-based approach are twofold. First, it aligns the randomization induced in the test with the advertiser’s goals from testing. Second, by adapting the experimental design to information acquired during the test, it substantially reduces the cost of experimentation to the advertiser. Since its launch in May 2019, Comparison Lift has been used in over 1,500 experiments. We estimate that use of the product has helped increase click-through rates of participating advertising campaigns by 46% on average. We estimate that the adaptive design in the product has generated 27% more clicks on average during testing compared to a fixed-sample A/B design. Both suggest significant value generation and cost savings to advertisers from the product.
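To illustrate the general idea behind the bandit-based approach named in the keywords (Thompson Sampling), the following is a minimal, hypothetical sketch of a Beta-Bernoulli Thompson Sampling loop that allocates impressions across candidate creatives according to their estimated click-through rates. All names and parameters here are illustrative assumptions; this is not the Comparison Lift system's actual algorithm.

    import random

    # Hypothetical sketch: Beta-Bernoulli Thompson Sampling over ad creatives.
    # Each arm is one creative; the reward is 1 for a click, 0 otherwise.
    # Illustrates the general technique only, not the authors' implementation.

    class ThompsonSamplingCTR:
        def __init__(self, n_creatives):
            # Beta(1, 1) prior on each creative's click-through rate.
            self.alpha = [1.0] * n_creatives  # 1 + observed clicks
            self.beta = [1.0] * n_creatives   # 1 + observed non-clicks

        def choose(self):
            # Draw a CTR from each posterior and serve the creative with the best draw.
            samples = [random.betavariate(a, b)
                       for a, b in zip(self.alpha, self.beta)]
            return max(range(len(samples)), key=samples.__getitem__)

        def update(self, creative, clicked):
            # Posterior update after observing one impression.
            if clicked:
                self.alpha[creative] += 1
            else:
                self.beta[creative] += 1

    if __name__ == "__main__":
        true_ctr = [0.02, 0.03, 0.05]  # unknown to the policy; for simulation only
        policy = ThompsonSamplingCTR(len(true_ctr))
        for _ in range(10000):
            arm = policy.choose()
            clicked = random.random() < true_ctr[arm]
            policy.update(arm, clicked)
        print("impressions per creative:",
              [int(a + b - 2) for a, b in zip(policy.alpha, policy.beta)])

Because impressions drift toward creatives whose posteriors look better as data accumulate, such an adaptive design shows fewer users the weaker creatives than a fixed-sample A/B test would, which is the source of the cost savings the abstract describes.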

Published

2021-05-18

How to Cite

Geng, T., Lin, X., Nair, H. S., Hao, J., Xiang, B., & Fan, S. (2021). Comparison Lift: Bandit-based Experimentation System for Online Advertising. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15117-15126. https://doi.org/10.1609/aaai.v35i17.17775

Section

IAAI Technical Track on Highly Innovative Applications of AI