Online Learning and Profit Maximization from Revealed Preferences


  • Kareem Amin University of Pennsylvania
  • Rachel Cummings California Institute of Technology
  • Lili Dworkin University of Pennsylvania
  • Michael Kearns University of Pennsylvania
  • Aaron Roth University of Pennsylvania



We consider the problem of learning from revealed preferences in an online setting. In our framework, in each period a consumer buys an optimal bundle of goods from a merchant according to her (linear) utility function and the current prices, subject to a budget constraint. The merchant observes only the purchased goods, and seeks to adapt prices to optimize his profits. We give an efficient algorithm for the merchant's problem that consists of a learning phase, in which the consumer's utility function is (perhaps partially) inferred, followed by a price optimization step. We also give an alternative online learning algorithm for the setting where prices are set exogenously, but the merchant would still like to predict the bundle that will be bought by the consumer, for purposes of inventory or supply-chain management. In contrast with most prior work on the revealed preferences problem, we demonstrate that by making stronger assumptions on the form of utility functions, efficient algorithms for both learning and profit maximization are possible, even in adaptive, online settings.




How to Cite

Amin, K., Cummings, R., Dworkin, L., Kearns, M., & Roth, A. (2015). Online Learning and Profit Maximization from Revealed Preferences. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1).



AAAI Technical Track: Game Theory and Economic Paradigms