Yet Another Traffic Classifier: A Masked Autoencoder Based Traffic Transformer with Multi-Level Flow Representation

Authors

  • Ruijie Zhao, Shanghai Jiao Tong University
  • Mingwei Zhan, Shanghai Jiao Tong University
  • Xianwen Deng, Shanghai Jiao Tong University
  • Yanhao Wang, QI-ANXIN
  • Yijun Wang, Shanghai Jiao Tong University
  • Guan Gui, Nanjing University of Posts and Telecommunications
  • Zhi Xue, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v37i4.25674

Keywords:

APP: Security, APP: Internet of Things, Sensor Networks & Smart Cities, APP: Web

Abstract

Traffic classification is a critical task in network security and management. Recent research has demonstrated the effectiveness of deep learning-based traffic classification methods. However, the following limitations remain: (1) the traffic representation is generated simply from raw packet bytes, so important information is missing; (2) model structures that directly apply deep learning algorithms do not take traffic characteristics into account; and (3) scenario-specific classifier training usually requires a labor-intensive and time-consuming data-labeling process. In this paper, we introduce a masked autoencoder (MAE) based traffic transformer with multi-level flow representation to tackle these problems. To model raw traffic data, we design a formatted traffic representation matrix with hierarchical flow information. We then develop an efficient Traffic Transformer, in which packet-level and flow-level attention mechanisms provide more efficient feature extraction with lower complexity. Finally, we use the MAE paradigm to pre-train our classifier on a large amount of unlabeled data and fine-tune it with a small amount of labeled data for a series of traffic classification tasks. Experimental results show that our method outperforms state-of-the-art methods on five real-world traffic datasets by a large margin. The code is available at https://github.com/NSSL-SJTU/YaTC.
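
A central piece of the MAE pre-training paradigm described above is per-sample random masking of the traffic representation's patch tokens before encoding. The snippet below is a minimal, illustrative sketch of that masking step (not the authors' implementation); the tensor shapes, embedding dimension, and mask ratio are assumptions for illustration, and the official code is in the repository linked above.

```python
# Illustrative MAE-style random masking: for each sample, keep a random
# subset of patch tokens for the encoder and flag the rest for reconstruction.
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """tokens: (batch, num_patches, dim). Returns visible tokens, a binary
    mask (1 = masked), and indices to restore the original patch order."""
    batch, num_patches, dim = tokens.shape
    num_keep = int(num_patches * (1 - mask_ratio))

    # A random permutation per sample decides which patches stay visible.
    noise = torch.rand(batch, num_patches, device=tokens.device)
    ids_shuffle = torch.argsort(noise, dim=1)
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    # Keep the first `num_keep` patches of each shuffled sequence.
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    # Binary mask in the original patch order: 0 = visible, 1 = masked.
    mask = torch.ones(batch, num_patches, device=tokens.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return visible, mask, ids_restore

# Example with hypothetical sizes: 8 flows, 40 patch tokens, 192-dim embeddings.
x = torch.randn(8, 40, 192)
visible, mask, ids_restore = random_masking(x)
print(visible.shape, int(mask[0].sum()))  # torch.Size([8, 10, 192]) 30
```

During pre-training, only the visible tokens are passed to the encoder, and a lightweight decoder reconstructs the masked patches; fine-tuning then reuses the encoder on fully visible inputs with a classification head.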

Published

2023-06-26

How to Cite

Zhao, R., Zhan, M., Deng, X., Wang, Y., Wang, Y., Gui, G., & Xue, Z. (2023). Yet Another Traffic Classifier: A Masked Autoencoder Based Traffic Transformer with Multi-Level Flow Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 5420-5427. https://doi.org/10.1609/aaai.v37i4.25674

Section

AAAI Technical Track on Domain(s) of Application