Trimming the Fat: Redundancy-Aware Acceleration Framework for DGNNs
DOI:
https://doi.org/10.1609/aaai.v40i26.39354
Abstract
Temporal graphs are essential for modeling complex real-world systems, such as social interactions, financial transactions, and recommendation systems, but the high computational cost and model complexity of dynamic graph neural networks (DGNNs) pose significant challenges for practical deployment. Although various pruning and sampling techniques have proven effective in accelerating static GNNs, they fall short in dynamic settings due to temporal dependencies in evolving graph structures. To address these challenges, we propose TrimDG, a general framework that accelerates DGNNs by eliminating both static and runtime redundancies. For static redundancy, we introduce a novel node influence metric, Temporal Personalized PageRank (TPP), to prune less informative nodes, and employ temporal binning to remove redundant events. For runtime redundancy during training, we develop an adaptive sampling strategy guided by the graph information bottleneck and further reduce sampling frequency through a temporal batch selector and a sampling cache. Theoretical analysis supports our design, and experiments on real-world datasets show that TrimDG reduces runtime by an average of 83.49% across diverse DGNN backbones while maintaining strong predictive performance, demonstrating both its efficiency and generalizability.
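The abstract only names the redundancy-removal techniques, so the following is a minimal sketch of one of them, the temporal-binning idea, under assumptions of our own: events are (src, dst, t) triples, bins have a fixed width, and the earliest event per edge-bin pair is kept as the representative. The function name temporal_binning and the keep-first policy are hypothetical illustrations, not the paper's exact procedure.

# Hypothetical sketch of temporal binning as described in the abstract:
# events on the same edge that fall into the same time bin are treated
# as redundant and collapsed to a single representative event.
# Event format, bin width, and keep-first policy are our assumptions.

from collections import OrderedDict

def temporal_binning(events, bin_width):
    """Collapse (src, dst, t) events sharing an edge and a time bin."""
    kept = OrderedDict()
    for src, dst, t in events:
        key = (src, dst, int(t // bin_width))  # edge identity + time bin
        if key not in kept:                    # keep the earliest event per bin
            kept[key] = (src, dst, t)
    return list(kept.values())

events = [(0, 1, 0.5), (0, 1, 0.9), (0, 1, 3.2), (2, 1, 1.1)]
print(temporal_binning(events, bin_width=1.0))
# -> [(0, 1, 0.5), (0, 1, 3.2), (2, 1, 1.1)]

On this toy stream the two events on edge (0, 1) inside the first unit-width bin collapse to one, which is the kind of event-level deduplication the abstract attributes to temporal binning.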
Published
2026-03-14
How to Cite
Huang, R., Cao, Y., Li, Y., Hu, J., Xiong, Z., Fang, S., … Yang, Y. (2026). Trimming the Fat: Redundancy-Aware Acceleration Framework for DGNNs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 22003–22011. https://doi.org/10.1609/aaai.v40i26.39354
Section
AAAI Technical Track on Machine Learning III