Cellular Network Traffic Scheduling With Deep Reinforcement Learning


  • Sandeep Chinchali Stanford University
  • Pan Hu Stanford University
  • Tianshu Chu Uhana, Inc.
  • Manu Sharma Uhana, Inc.
  • Manu Bansal Uhana, Inc.
  • Rakesh Misra Uhana, Inc.
  • Marco Pavone Stanford University
  • Sachin Katti Stanford University




Keywords: Reinforcement Learning, Time-series/Data Streams


Modern mobile networks are facing unprecedented growth in demand due to a new class of traffic from Internet of Things (IoT) devices such as smart wearables and autonomous cars. Future networks must schedule delay-tolerant software updates, data backup, and other transfers from IoT devices while maintaining strict service guarantees for conventional real-time applications such as voice calling and video. This problem is extremely challenging because conventional traffic is highly dynamic across space and time, so its performance is significantly impacted if all IoT traffic is scheduled immediately when it originates. In this paper, we present a reinforcement learning (RL) based scheduler that can dynamically adapt to traffic variation, and to various reward functions set by network operators, to optimally schedule IoT traffic. Using 4 weeks of real network data from downtown Melbourne, Australia spanning diverse traffic patterns, we demonstrate that our RL scheduler can enable mobile networks to carry 14.7% more data with minimal impact on existing traffic, and outperforms heuristic schedulers by more than 2x. Our work is a valuable step towards designing autonomous, "self-driving" networks that learn to manage themselves from past data.




How to Cite

Chinchali, S., Hu, P., Chu, T., Sharma, M., Bansal, M., Misra, R., Pavone, M., & Katti, S. (2018). Cellular Network Traffic Scheduling With Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11339



Computational Sustainability and Artificial Intelligence