Neural TSP Solver with Progressive Distillation


  • Dongxiang Zhang, Zhejiang University
  • Ziyang Xiao, Zhejiang University
  • Yuan Wang, Singapore University of Social Sciences
  • Mingli Song, Zhejiang University
  • Gang Chen, Zhejiang University



PRS: Planning/Scheduling and Learning, ML: Reinforcement Learning Algorithms


The travelling salesman problem (TSP) is NP-hard, with an exponential search space. Recently, encoder-decoder models have emerged as an attractive line of neural TSP solvers because they can instantly obtain near-optimal results for small-scale instances. Nevertheless, their training efficiency and solution quality degrade dramatically on large-scale problems. To address this issue, we propose a novel progressive distillation framework that adopts curriculum learning to train on TSP samples in increasing order of problem size and progressively distills high-level knowledge from small models to large models via a distillation loss. In other words, a trained small model serves as the teacher network that guides action selection when training a larger model. To accelerate training, we also propose a Delaunay-graph-based action mask and a new attention-based decoder that reduce decoding cost. Experimental results show that our approach establishes clear advantages over existing encoder-decoder models in terms of training effectiveness and solution quality. In addition, we validate its usefulness as an initial-solution generator for state-of-the-art TSP solvers, whose probability of obtaining the optimal solution can be further improved in such a hybrid manner.
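The distillation loss mentioned in the abstract can be illustrated as a softened cross-entropy between the teacher's and student's action distributions over candidate next cities. The sketch below is a minimal assumption-laden illustration, not the paper's exact formulation: the function names, the temperature value, and the use of plain NumPy (rather than the paper's training framework) are all my own choices for exposition.

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax; tau > 1 softens the distribution."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """Cross-entropy of the student's softened action distribution
    against the teacher's, as in standard knowledge distillation.
    (Illustrative only; the paper's loss may differ in detail.)"""
    p_teacher = softmax(teacher_logits, tau)
    log_p_student = np.log(softmax(student_logits, tau))
    return -(p_teacher * log_p_student).sum()

# Toy usage: logits over three candidate next cities.
teacher = np.array([2.0, 1.0, 0.5])
student = np.array([1.8, 1.1, 0.4])
loss = distillation_loss(student, teacher)
```

A student whose action distribution matches the teacher's minimizes this loss (down to the teacher's own entropy), which is what lets a small trained model guide action selection for a larger one.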




How to Cite

Zhang, D., Xiao, Z., Wang, Y., Song, M., & Chen, G. (2023). Neural TSP Solver with Progressive Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 12147-12154.



AAAI Technical Track on Planning, Routing, and Scheduling