Online Bandit Learning for a Special Class of Non-Convex Losses

Authors

  • Lijun Zhang, Nanjing University
  • Tianbao Yang, The University of Iowa
  • Rong Jin, Michigan State University
  • Zhi-Hua Zhou, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v29i1.9589

Keywords:

Online Bandit Learning, Non-convex Losses, Exploration-exploitation

Abstract

In online bandit learning, the learner aims to minimize a sequence of losses while observing only the value of each loss at a single point. Although various algorithms and theories have been developed for online bandit learning, most are limited to convex losses. In this paper, we investigate online bandit learning with non-convex losses and develop an efficient algorithm with formal theoretical guarantees. Specifically, we consider a class of losses, each of which is the composition of a non-increasing scalar function and a linear function. This setting models a wide range of supervised learning applications, such as online classification with a non-convex loss. Theoretical analysis shows that our algorithm achieves an O(poly(d)·T^{2/3}) regret bound when the variation of the loss function is small. To the best of our knowledge, this is the first work in online bandit learning that does not rely on convexity.
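
The loss class described above has the form ℓ_t(w) = f_t(⟨w, x_t⟩), where f_t is a non-increasing, possibly non-convex scalar function (e.g., a sigmoid classification loss). As a minimal sketch of this feedback model (not the authors' algorithm), the following Python snippet runs a standard one-point bandit gradient baseline, in the spirit of Flaxman et al. (2005), on such losses. The step size, perturbation radius, data model, and helper names are all illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000
eta, delta = 0.05, 0.1   # illustrative step size and perturbation radius

def sigmoid_loss(z):
    # Non-increasing, non-convex scalar function: f(z) = 1 / (1 + e^z)
    return 1.0 / (1.0 + np.exp(z))

def project_ball(w, radius):
    # Euclidean projection onto an L2 ball of the given radius
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

w = np.zeros(d)
total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d)                  # feature vector for round t
    y = np.sign(x[0] + 0.1 * rng.normal())  # hypothetical label rule
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                  # uniform direction on the sphere
    w_play = w + delta * u                  # play a randomly perturbed point
    loss = sigmoid_loss(y * np.dot(w_play, x))  # the only observed feedback
    total_loss += loss
    g_hat = (d / delta) * loss * u          # one-point gradient estimate
    w = project_ball(w - eta * g_hat, 1.0 - delta)  # keep w + delta*u feasible

print(f"average loss over T={T} rounds: {total_loss / T:.3f}")

Note that the learner never queries the gradient of ℓ_t: the single observed value ℓ_t(w_play) is all the feedback available, which is exactly the bandit setting the paper studies.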

Published

2015-02-21

How to Cite

Zhang, L., Yang, T., Jin, R., & Zhou, Z.-H. (2015). Online Bandit Learning for a Special Class of Non-Convex Losses. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9589

Section

Main Track: Novel Machine Learning Algorithms