MTP: Exploring Multimodal Urban Traffic Profiling with Modality Augmentation and Spectrum Fusion

Authors

  • Haolong Xiang, Nanjing University of Information Science and Technology; State Key Lab. for Novel Software Technology, Nanjing University
  • Peisi Wang, Nanjing University of Information Science and Technology
  • Xiaolong Xu, Nanjing University of Information Science and Technology
  • Kun Yi, State Information Center of China
  • Xuyun Zhang, Macquarie University
  • Quan Z. Sheng, Macquarie University
  • Amin Beheshti, Macquarie University
  • Wei Fan, University of Auckland

DOI:

https://doi.org/10.1609/aaai.v40i32.39917

Abstract

With rapid urbanization, traffic signals from various sensors play a significant role in monitoring the state of cities, providing a strong foundation for ensuring safe travel, reducing traffic congestion, and optimizing urban mobility. Most existing methods for traffic time series modeling rely on the original data modality, i.e., numerical readings taken directly from sensors in cities. However, this unimodal approach overlooks the semantic information present in heterogeneous multimodal urban data from different perspectives, which hinders a comprehensive understanding of traffic signals and limits accurate prediction of complex traffic dynamics. To address this problem, we propose a novel multimodal framework, MTP, for urban Traffic Profiling, which learns multimodal features from numeric, visual, and textual perspectives in the frequency domain. The three branches provide complementary multimodal views of the traffic signals for augmentation, while the frequency learning strategies refine the information extracted from each. Specifically, we first perform visual augmentation of the traffic time series, transforming the original modality into periodicity images and frequency images for visual learning. We also generate descriptive texts for the traffic time series based on the specific topic, background information, and item descriptions for textual learning. To complement the numeric information, we apply frequency multilayer perceptrons to the original modality. We then design hierarchical contrastive learning over the three branches to fuse the three modalities. Finally, extensive experiments on six real-world datasets demonstrate superior performance compared with state-of-the-art approaches.
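As a rough illustration of the visual-augmentation step described above, one plausible realization is to estimate a series' dominant period from its FFT spectrum and fold the series into a 2-D "periodicity image". This is a minimal numpy sketch under our own assumptions; the function names and the exact construction are illustrative, not the paper's implementation.

```python
import numpy as np

def dominant_period(x: np.ndarray) -> int:
    """Estimate the dominant period of a 1-D series from its FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    spec[0] = 0.0                      # ignore the DC component
    k = int(np.argmax(spec))           # index of the strongest frequency bin
    return max(1, len(x) // max(k, 1)) # bin k corresponds to period len(x) / k

def periodicity_image(x: np.ndarray, period: int) -> np.ndarray:
    """Fold the series into a (cycles x period) 2-D array for visual learning."""
    rows = len(x) // period
    return x[: rows * period].reshape(rows, period)

# Toy traffic-like signal: one week of hourly readings with a daily (24-step) cycle.
t = np.arange(24 * 7)
rng = np.random.RandomState(0)
x = 10 + 5 * np.sin(2 * np.pi * t / 24) + 0.1 * rng.randn(len(t))

p = dominant_period(x)         # recovers the 24-step daily period
img = periodicity_image(x, p)  # a 7 x 24 "image": one row per day
```

Each row of `img` then covers one cycle, so recurring daily structure appears as vertical patterns that an image encoder can exploit; the magnitude spectrum itself could serve analogously as a frequency image.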

Published

2026-03-14

How to Cite

Xiang, H., Wang, P., Xu, X., Yi, K., Zhang, X., Sheng, Q. Z., … Fan, W. (2026). MTP: Exploring Multimodal Urban Traffic Profiling with Modality Augmentation and Spectrum Fusion. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27037–27045. https://doi.org/10.1609/aaai.v40i32.39917

Section

AAAI Technical Track on Machine Learning IX