CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting


  • Chaoyun Zhang Tencent Lightspeed & Quantum Studios
  • Marco Fiore IMDEA Networks
  • Iain Murray University of Edinburgh
  • Paul Patras University of Edinburgh



Time-Series/Data Streams


This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (DConv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The DConv operator removes the grid-structured data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e., mobile service traffic forecasting and air quality indicator forecasting. Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of competitor neural network models.
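To give a concrete flavor of the idea, the sketch below shows a heavily simplified, NumPy-only "point-cloud convolution": each point gathers the features of its K nearest neighbors and mixes them with a shared weight tensor. This is an illustrative assumption of how such an operator can be structured, not the paper's actual DConv implementation; all names (`dconv`, the weight layout `(K, C_in, C_out)`) are hypothetical.

```python
import numpy as np

def dconv(points, features, weights):
    """Toy nearest-neighbour point-cloud convolution (illustration only,
    NOT the paper's DConv). For each point, gather the features of its
    K nearest neighbours (self included) and mix them with a weight
    tensor shared across all points."""
    K = weights.shape[0]
    # pairwise squared distances between all points
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :K]       # K nearest neighbours per point
    neigh = features[idx]                    # shape (N, K, C_in)
    # weights has shape (K, C_in, C_out): one matrix per neighbour rank
    return np.einsum('nkc,kco->no', neigh, weights)   # shape (N, C_out)

rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 2))     # 10 points with 2-D coordinates
feat = rng.normal(size=(10, 4))    # 4 input feature channels per point
W = rng.normal(size=(3, 4, 8))     # K=3 neighbours, 4 -> 8 channels
out = dconv(pts, feat, W)
print(out.shape)                   # (10, 8)
```

Because neighbours are selected by distance rather than by position in the input array, reordering the points simply reorders the output rows, which mirrors the permutation-invariance property the abstract attributes to DConv.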




How to Cite

Zhang, C., Fiore, M., Murray, I., & Patras, P. (2021). CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10851-10858.



AAAI Technical Track on Machine Learning V