AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution

Authors

  • Yuxuan Zhao, The Chinese University of Hong Kong
  • Qi Sun, The Chinese University of Hong Kong
  • Zhuolun He, The Chinese University of Hong Kong
  • Yang Bai, The Chinese University of Hong Kong; SmartMore
  • Bei Yu, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v37i9.26343

Keywords:

ML: Scalability of ML Systems, APP: Design

Abstract

Deep learning frameworks optimize computation graphs and intra-operator computations to boost inference performance on GPUs, while inter-operator parallelism is usually ignored. In this paper, a unified framework, AutoGraph, is proposed to obtain highly optimized computation graphs in favor of parallel execution of GPU kernels. A novel dynamic programming algorithm, combined with backtracking search, is adopted to explore the optimal graph optimization solution, guided by fast performance estimation from a mixed critical path cost. Accurate runtime information, obtained by launching GPU kernels on multiple streams with CUDA Graph, is utilized to determine the convergence of the optimization. Experimental results demonstrate that our method achieves up to a 3.47x speedup over existing graph optimization methods. Moreover, AutoGraph outperforms state-of-the-art parallel kernel launch frameworks by up to 1.26x.
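As a point of reference for the launch mechanism the abstract describes, below is a minimal sketch (not taken from the paper) of capturing kernels issued on two CUDA streams into a CUDA Graph and replaying them with a single launch. The `scale` kernel, the fork/join events, and all sizes are illustrative assumptions, and error checking is omitted for brevity.

    #include <cuda_runtime.h>

    // Toy kernel standing in for a DNN operator (illustrative assumption).
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;
        float *d_a, *d_b;
        cudaMalloc(&d_a, n * sizeof(float));
        cudaMalloc(&d_b, n * sizeof(float));

        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);
        cudaEvent_t fork, join;
        cudaEventCreate(&fork);
        cudaEventCreate(&join);

        // Begin capture on s1; fork s2 off it so both streams join the capture.
        cudaGraph_t graph;
        cudaStreamBeginCapture(s1, cudaStreamCaptureModeGlobal);
        cudaEventRecord(fork, s1);
        cudaStreamWaitEvent(s2, fork, 0);

        // Two independent kernels recorded on separate streams: the graph
        // preserves their independence, so they may run concurrently.
        scale<<<n / 256, 256, 0, s1>>>(d_a, 2.0f, n);
        scale<<<n / 256, 256, 0, s2>>>(d_b, 3.0f, n);

        // Join s2 back into s1 before ending the capture.
        cudaEventRecord(join, s2);
        cudaStreamWaitEvent(s1, join, 0);
        cudaStreamEndCapture(s1, &graph);

        // Instantiate once, then replay the whole multi-stream schedule with
        // a single launch call (CUDA 11 signature shown; CUDA 12 drops the
        // last three parameters).
        cudaGraphExec_t exec;
        cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);
        cudaGraphLaunch(exec, s1);
        cudaStreamSynchronize(s1);

        cudaGraphExecDestroy(exec);
        cudaGraphDestroy(graph);
        cudaEventDestroy(fork);
        cudaEventDestroy(join);
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
        cudaFree(d_a);
        cudaFree(d_b);
        return 0;
    }

Because an instantiated graph is replayed with one call, its end-to-end latency reflects the concurrent schedule rather than per-kernel launch overhead, which is consistent with the kind of accurate runtime feedback the abstract refers to.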

Published

2023-06-26

How to Cite

Zhao, Y., Sun, Q., He, Z., Bai, Y., & Yu, B. (2023). AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11354-11362. https://doi.org/10.1609/aaai.v37i9.26343

Section

AAAI Technical Track on Machine Learning IV