TY - JOUR
AU - Deng, Hanming
AU - Hua, Yang
AU - Song, Tao
AU - Xue, Zhengui
AU - Ma, Ruhui
AU - Robertson, Neil
AU - Guan, Haibing
PY - 2020/04/03
Y2 - 2024/03/29
TI - Reinforcing Neural Network Stability with Attractor Dynamics
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 04
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v34i04.5787
UR - https://ojs.aaai.org/index.php/AAAI/article/view/5787
SP - 3765-3772
AB - Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a light-weight module that can be readily deployed on state-of-the-art ResNet-like networks. RMAN is needed only during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through intensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks due to the reinforced stability.
ER -