TY - JOUR
AU - Akpinar, Nil-Jana
AU - Kratzwald, Bernhard
AU - Feuerriegel, Stefan
PY - 2020/04/03
Y2 - 2024/03/29
TI - Sample Complexity Bounds for RNNs with Application to Combinatorial Graph Problems (Student Abstract)
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 10
SE - Student Abstract Track
DO - 10.1609/aaai.v34i10.7144
UR - https://ojs.aaai.org/index.php/AAAI/article/view/7144
SP - 13745-13746
AB - Learning to predict solutions to real-valued combinatorial graph problems promises efficient approximations. As demonstrated on the NP-hard edge clique cover number, recurrent neural networks (RNNs) are particularly suited for this task and can even outperform state-of-the-art heuristics. However, the theoretical framework for estimating real-valued RNNs is only poorly understood. As our primary contribution, we give the first upper bound on the sample complexity of learning real-valued RNNs. While such derivations exist for feed-forward and convolutional neural networks, ours is the first for recurrent neural networks. Given a single-layer RNN with a rectified linear units and input of length b, we show that a population prediction error of ε can be realized with at most Õ(a^4 b/ε^2) samples. We further derive comparable results for multi-layer RNNs. Accordingly, a size-adaptive RNN fed with graphs of at most n vertices can be learned with Õ(n^6/ε^2) samples, i.e., with only a polynomial number of samples. For combinatorial graph problems, this provides a theoretical foundation that renders RNNs competitive.
ER -