Supervised Contrastive Few-Shot Learning for High-Frequency Time Series

Authors

  • Xi Chen, Alibaba Group
  • Cheng Ge, Alibaba Group
  • Ming Wang, Alibaba Group
  • Jin Wang, Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v37i6.25863

Keywords:

ML: Representation Learning, APP: Internet of Things, Sensor Networks & Smart Cities, ML: Applications, ML: Classification and Regression, ML: Deep Neural Architectures, ML: Time-Series/Data Streams

Abstract

Significant progress has been made in representation learning, especially with the recent success of self-supervised contrastive learning. However, for time series whose patterns carry little intuitive or semantic meaning, sampling bias may be inevitable in unsupervised approaches. Although supervised contrastive learning has shown superior performance by leveraging label information, it may also suffer from class collapse. In this study, we consider a realistic industrial scenario in which only limited annotation information is available. A supervised contrastive framework is developed for high-frequency time series representation and classification, in which a novel variant of the supervised contrastive loss is proposed to incorporate multiple augmentations while inducing spread within each class. Experiments on four mainstream public datasets, together with a series of sensitivity and ablation analyses, demonstrate that the learned representations are effective and robust compared with direct supervised learning and self-supervised learning, notably in the minimal few-shot setting.
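The paper's exact loss is not reproduced on this page. As a rough reference point only, the sketch below implements the standard multi-view supervised contrastive (SupCon) objective in PyTorch, which is the baseline the proposed variant extends; the function name `supcon_loss`, the `temperature` default, and the tensor shapes are illustrative assumptions, and the spread-inducing modification described in the abstract is not included.

```python
# Minimal sketch of a multi-view supervised contrastive (SupCon-style) loss.
# NOT the authors' proposed variant; the within-class spread term is omitted.
import torch
import torch.nn.functional as F


def supcon_loss(features: torch.Tensor,
                labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """features: (batch, n_views, dim) embeddings from several augmentations;
    labels: (batch,) integer class labels."""
    batch, n_views, _ = features.shape
    # Treat every augmented view as an anchor; L2-normalize the embeddings.
    z = F.normalize(features.reshape(batch * n_views, -1), dim=1)
    y = labels.repeat_interleave(n_views)  # label for each view

    # Pairwise cosine similarities scaled by temperature.
    logits = torch.matmul(z, z.T) / temperature
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # stability

    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    # Positives: same label, excluding the anchor itself.
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask

    # Softmax denominator over all other samples (anchor excluded).
    exp_logits = torch.exp(logits).masked_fill(self_mask, 0.0)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True))

    # Average log-probability over the positives of each anchor.
    mean_log_prob_pos = (pos_mask.float() * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()


if __name__ == "__main__":
    # Toy usage: 8 samples, 2 augmented views each, 4 classes.
    feats = torch.randn(8, 2, 128)
    labs = torch.randint(0, 4, (8,))
    print(supcon_loss(feats, labs).item())
```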

Published

2023-06-26

How to Cite

Chen, X., Ge, C., Wang, M., & Wang, J. (2023). Supervised Contrastive Few-Shot Learning for High-Frequency Time Series. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7069-7077. https://doi.org/10.1609/aaai.v37i6.25863

Issue

Vol. 37 No. 6 (2023)

Section

AAAI Technical Track on Machine Learning I