Self-Paced Learning: An Implicit Regularization Perspective

Authors

  • Yanbo Fan, Institute of Automation, Chinese Academy of Sciences
  • Ran He, Institute of Automation, Chinese Academy of Sciences
  • Jian Liang, Institute of Automation, Chinese Academy of Sciences
  • Baogang Hu, Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v31i1.10809

Keywords:

self-paced learning, implicit regularizer, half-quadratic optimization

Abstract

Self-paced learning (SPL) mimics the cognitive process of humans and animals, which gradually learns from easy to hard samples. One key issue in SPL is obtaining a better weighting strategy, which is determined by the minimizer function. Existing methods usually pursue this by manually designing the explicit form of an SPL regularizer. In this paper, we study a group of new regularizers (named self-paced implicit regularizers) that are deduced from robust loss functions. Based on convex conjugacy theory, the minimizer function for a self-paced implicit regularizer can be learned directly from the latent loss function, even when the analytic form of the regularizer itself is unknown. A general SPL framework (named SPL-IR) is developed accordingly. We demonstrate that the learning procedure of SPL-IR is associated with latent robust loss functions, which provides theoretical insight into its working mechanism. We further analyze the relation between SPL-IR and half-quadratic optimization and provide a group of self-paced implicit regularizers accordingly. Finally, we apply SPL-IR to both supervised and unsupervised tasks, and the experimental results corroborate our ideas and demonstrate the correctness and effectiveness of the implicit regularizers.
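For illustration, the alternating procedure the abstract describes can be sketched as follows. This is a minimal sketch, assuming a squared base loss and the weight function exp(-loss/sigma) induced by the Welsch robust loss under half-quadratic optimization; the function names, the ridge base learner, and the geometric pace schedule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

def welsch_minimizer(losses, sigma):
    # Weight (minimizer) function induced by the Welsch robust loss via
    # half-quadratic optimization: easy samples (small loss) get weights
    # near 1, hard samples (large loss) are down-weighted toward 0.
    return np.exp(-losses / sigma)

def spl_ir_sketch(X, y, sigma0=0.5, growth=1.2, n_iters=20):
    # Alternate between (1) a weighted model update and (2) re-computing
    # sample weights with the minimizer function. Growing sigma gradually
    # admits harder samples, mimicking the easy-to-hard curriculum; no
    # explicit analytic form of the SPL regularizer is ever needed.
    model = Ridge(alpha=1e-2)
    weights = np.ones(len(y))
    sigma = sigma0
    for _ in range(n_iters):
        model.fit(X, y, sample_weight=weights)
        losses = (y - model.predict(X)) ** 2
        weights = welsch_minimizer(losses, sigma)
        sigma *= growth
    return model, weights
```

In this sketch, swapping welsch_minimizer for the weight function of another robust loss yields a different member of the implicit-regularizer family while leaving the alternating loop unchanged.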

Published

2017-02-13

How to Cite

Fan, Y., He, R., Liang, J., & Hu, B. (2017). Self-Paced Learning: An Implicit Regularization Perspective. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10809