Trafformer: Unify Time and Space in Traffic Prediction
Keywords: ML: Graph-based Machine Learning, DMKM: Graph Mining, Social Network Analysis & Community Mining, DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data, APP: Transportation

Abstract

Traffic prediction is an important component of intelligent transportation systems. Existing deep learning methods encode temporal information and spatial information separately or iteratively. However, spatial and temporal information is highly correlated in a traffic network, so existing methods may fail to learn the complex spatial-temporal dependencies hidden in the traffic network due to their decomposed model design. To overcome this limitation, we propose a new model named Trafformer, which unifies spatial and temporal information in one transformer-style model. Trafformer enables every node at every timestamp to interact with every other node at every other timestamp in a single step through the spatial-temporal correlation matrix. This design enables Trafformer to capture complex spatial-temporal dependencies. Following the same design principle, we use a generative-style decoder that predicts multiple timestamps in only one forward pass, instead of the iterative-style decoder of the Transformer. Furthermore, to reduce the complexity brought about by the huge spatial-temporal self-attention matrix, we also propose two variants of Trafformer that further improve training and inference speed without losing much effectiveness. Extensive experiments on two traffic datasets demonstrate that Trafformer outperforms existing methods and offers a promising direction for the spatial-temporal traffic prediction problem.
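The core idea in the abstract, letting every (node, timestamp) pair attend to every other pair in one attention step, can be sketched by flattening the time and space axes into a single token axis before applying standard scaled dot-product attention. The following is a minimal illustrative sketch, not the paper's actual implementation; the function name, the use of identical queries/keys/values, and the single-head form are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unified_spatiotemporal_attention(x):
    """Hypothetical sketch of unified spatial-temporal self-attention.

    x: array of shape (T, N, d) -- features for N nodes over T timestamps.
    Flattening (T, N) into one axis of T*N tokens lets every node at every
    timestamp interact with every other node-timestamp pair in one step,
    via a (T*N) x (T*N) spatial-temporal correlation matrix.
    """
    T, N, d = x.shape
    tokens = x.reshape(T * N, d)                      # one token per (time, node) pair
    scores = tokens @ tokens.T / np.sqrt(d)           # (T*N, T*N) correlation matrix
    attn = softmax(scores, axis=-1)                   # each row is a distribution
    return (attn @ tokens).reshape(T, N, d)           # back to (T, N, d)

# Toy usage: 4 timestamps, 5 nodes, 8-dim features.
out = unified_spatiotemporal_attention(np.random.randn(4, 5, 8))
```

The quadratic (T*N) x (T*N) matrix is exactly the cost the abstract's two Trafformer variants aim to reduce.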
How to Cite
Jin, D., Shi, J., Wang, R., Li, Y., Huang, Y., & Yang, Y.-B. (2023). Trafformer: Unify Time and Space in Traffic Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8114-8122. https://doi.org/10.1609/aaai.v37i7.25980
AAAI Technical Track on Machine Learning II