TY - JOUR
AU - Li, Jia
AU - Fang, Cong
AU - Lin, Zhouchen
PY - 2019/07/17
Y2 - 2024/07/12
TI - Lifted Proximal Operator Machines
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 33
IS - 01
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v33i01.33014181
UR - https://ojs.aaai.org/index.php/AAAI/article/view/4323
SP  - 4181
EP  - 4188
AB  - We propose a new optimization method for training feedforward neural networks. By rewriting the activation function as an equivalent proximal operator, we approximate a feedforward neural network by adding the proximal operators to the objective function as penalties; hence we call it the lifted proximal operator machine (LPOM). LPOM is block multiconvex in all layer-wise weights and activations. This allows us to use block coordinate descent to update the layer-wise weights and activations. Most notably, we only use the mapping of the activation function itself, rather than its derivative, thus avoiding the gradient vanishing or blow-up issues in gradient-based training methods. So our method is applicable to various non-decreasing Lipschitz continuous activation functions, which can be saturating and non-differentiable. LPOM does not require more auxiliary variables than the layer-wise activations, thus using roughly the same amount of memory as stochastic gradient descent (SGD) does. Its parameter tuning is also much simpler. We further prove the convergence of updating the layer-wise weights and activations and point out that the optimization could be made parallel by asynchronous updates. Experiments on the MNIST and CIFAR-10 datasets testify to the advantages of LPOM.
ER -