Stochastic Loss Function

Authors

  • Qingliang Liu, Fudan University
  • Jinmei Lai, Fudan University

DOI:

https://doi.org/10.1609/aaai.v34i04.5925

Abstract

Training deep neural networks is inherently subject to predefined, fixed loss functions during optimization. To improve learning efficiency, we develop the Stochastic Loss Function (SLF) to dynamically and automatically generate appropriate gradients for training deep networks in the same round of back-propagation, while maintaining the completeness and differentiability of the training pipeline. In SLF, a generic loss function is formulated as a joint optimization problem over network weights and loss parameters. To guarantee the requisite efficiency, gradients with respect to the generic differentiable loss are leveraged both to select the loss function and to optimize the network weights. Extensive experiments on a variety of popular datasets strongly demonstrate that SLF obtains appropriate gradients at different stages of training, and can significantly improve the performance of various deep models on real-world tasks including classification, clustering, regression, neural machine translation, and object detection.
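The abstract's core idea, treating the loss as a differentiable combination of candidate losses whose mixing parameters are optimized jointly with the network weights in the same backward pass, can be illustrated with a minimal sketch. This is not the authors' algorithm: the candidate losses (MSE and a smooth L1), the softmax parameterization of the loss parameters `alpha`, and the toy linear model are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over loss-parameter logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy data: 1-D linear regression with true slope 3.0 (illustrative setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=32)

w = np.zeros(1)      # "network" weights (a single linear coefficient)
alpha = np.zeros(2)  # loss parameters: logits over the candidate losses
eps = 1e-6           # smoothing constant for the smooth-L1 candidate
lr = 0.1

for step in range(200):
    err = X @ w - y
    # Candidate losses: L2 (MSE) and a smooth L1 surrogate.
    l2 = np.mean(err ** 2)
    l1 = np.mean(np.sqrt(err ** 2 + eps))
    p = softmax(alpha)                  # differentiable mixing weights
    # Blended generic loss: L = p[0] * l2 + p[1] * l1.
    # Gradients w.r.t. the network weights (per-candidate, then blended).
    g_l2 = 2 * X.T @ err / len(y)
    g_l1 = X.T @ (err / np.sqrt(err ** 2 + eps)) / len(y)
    g_w = p[0] * g_l2 + p[1] * g_l1
    # Gradient w.r.t. the loss parameters, through the softmax:
    # dL/dalpha_j = p_j * (l_j - sum_i p_i * l_i).
    losses = np.array([l2, l1])
    g_alpha = p * (losses - p @ losses)
    # Joint update of weights and loss parameters in one round.
    w -= lr * g_w
    alpha -= lr * g_alpha
```

After training, `w` recovers the true slope while `alpha` has shifted the mixing weights toward whichever candidate loss yields smaller values at the current stage; the actual SLF formulation additionally handles selection among a richer family of losses.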

Published

2020-04-03

How to Cite

Liu, Q., & Lai, J. (2020). Stochastic Loss Function. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4884-4891. https://doi.org/10.1609/aaai.v34i04.5925

Section

AAAI Technical Track: Machine Learning