Urban Region Embedding via Multi-View Contrastive Prediction
DOI:
https://doi.org/10.1609/aaai.v38i8.28718
Keywords:
DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data
Abstract
Recently, learning urban region representations from multi-modal data (information views) has become increasingly popular, enabling a deeper understanding of how various socioeconomic features are distributed across cities. However, previous methods usually blend multi-view information in a posterior stage, falling short in learning coherent and consistent representations across different views. In this paper, we form a new pipeline to learn consistent representations across varying views, and propose the multi-view Contrastive Prediction model for urban Region embedding (ReCP), which leverages the multiple information views from point-of-interest (POI) and human mobility data. Specifically, ReCP comprises two major modules, namely an intra-view learning module that utilizes contrastive learning and feature reconstruction to capture the unique information in each single view, and an inter-view learning module that perceives the consistency between the two views using a contrastive prediction learning scheme. We conduct thorough experiments on two downstream tasks to assess the proposed model, i.e., land use clustering and region popularity prediction. The experimental results demonstrate that our model significantly outperforms state-of-the-art baseline methods in urban region representation learning.
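The inter-view learning described in the abstract aligns paired embeddings of the same region from the POI view and the mobility view. As a minimal illustration (not the authors' implementation; the function name, shapes, and temperature value are assumptions), a symmetric InfoNCE-style contrastive loss over a batch of regions can be sketched as follows, where matching rows of the two view matrices are positives and all other rows serve as negatives:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between two views of a batch of regions.

    z_a, z_b: (n_regions, dim) arrays; row i of each is a hypothetical
    embedding of the same region from the POI and mobility views.
    """
    # L2-normalize rows so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (n, n) cross-view similarities

    def xent(l):
        # cross-entropy with the positive pair on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # average the two matching directions (POI->mobility, mobility->POI)
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing such a loss pulls the two views of each region together while pushing apart embeddings of different regions, which is one standard way to realize the cross-view consistency the abstract refers to.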
Published
2024-03-24
How to Cite
Li, Z., Huang, W., Zhao, K., Yang, M., Gong, Y., & Chen, M. (2024). Urban Region Embedding via Multi-View Contrastive Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8724-8732. https://doi.org/10.1609/aaai.v38i8.28718
Issue
Section
AAAI Technical Track on Data Mining & Knowledge Management