Robust Temporal Smoothness in Multi-Task Learning

Authors

  • Menghui Zhou, Department of Software, Yunnan University
  • Yu Zhang, Department of Computer Science, University of Sheffield
  • Yun Yang, Department of Software, Yunnan University
  • Tong Liu, Department of Computer Science, University of Sheffield
  • Po Yang, Department of Computer Science, University of Sheffield

DOI:

https://doi.org/10.1609/aaai.v37i9.26351

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning, DMKM: Data Stream Mining, DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data, ML: Optimization, ML: Transparent, Interpretable, Explainable ML

Abstract

Multi-task learning models based on the temporal smoothness assumption, in which each time point in a sequence of time points corresponds to a prediction task, assume that adjacent tasks are similar to each other. However, these models do not account for the effect of outliers. In this paper, we show that even a single outlier task can destroy the performance of the entire model. To solve this problem, we propose two Robust Temporal Smoothness (RoTS) frameworks. Compared with existing models based on temporal relations, our methods not only capture temporal smoothness information but also identify outlier tasks, without increasing the computational complexity. We present detailed theoretical analyses to evaluate the performance of our methods. Experimental results on synthetic and real-life datasets demonstrate the effectiveness of our frameworks. We also discuss several potential applications and extensions of our RoTS frameworks.
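The paper's exact RoTS formulation is not reproduced on this page, but the intuition behind the abstract can be illustrated with a short sketch. Below is a minimal Python example, assuming the standard temporal smoothness regularizer sum_t ||w_{t+1} - w_t||_2^2 and, as a stand-in robust variant, an l2,1-style penalty sum_t ||w_{t+1} - w_t||_2; the function names, the weight-matrix layout, and the choice of robust penalty are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def temporal_smoothness_penalty(W):
        """Standard temporal smoothness regularizer:
        sum over adjacent task pairs of ||w_{t+1} - w_t||_2^2.
        W has shape (d, T): one weight column per time-point task."""
        diffs = W[:, 1:] - W[:, :-1]   # (d, T-1) adjacent differences
        return np.sum(diffs ** 2)

    def robust_smoothness_penalty(W):
        """Illustrative robust variant (an assumption, not the paper's
        RoTS penalty): an l2,1-style term sum_t ||w_{t+1} - w_t||_2,
        which grows only linearly with one aberrant difference, so a
        single outlier task cannot dominate the objective."""
        diffs = W[:, 1:] - W[:, :-1]
        return np.sum(np.linalg.norm(diffs, axis=0))

    # Toy comparison: smooth weights for 5 tasks, then corrupt task 3
    # and observe how each penalty reacts to the single outlier.
    rng = np.random.default_rng(0)
    W = np.cumsum(0.1 * rng.standard_normal((4, 5)), axis=1)
    W_outlier = W.copy()
    W_outlier[:, 2] += 10.0   # one corrupted (outlier) task
    print(temporal_smoothness_penalty(W), temporal_smoothness_penalty(W_outlier))
    print(robust_smoothness_penalty(W), robust_smoothness_penalty(W_outlier))

In the toy comparison, corrupting a single task inflates the squared penalty quadratically, while the l2,1-style penalty grows only linearly in the size of the corruption, mirroring the abstract's claim that one outlier task can destroy a model built purely on temporal smoothness.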

Published

2023-06-26

How to Cite

Zhou, M., Zhang, Y., Yang, Y., Liu, T., & Yang, P. (2023). Robust Temporal Smoothness in Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11426-11434. https://doi.org/10.1609/aaai.v37i9.26351

Section

AAAI Technical Track on Machine Learning IV