Learning Sparse Sharing Architectures for Multiple Tasks

Authors

  • Tianxiang Sun, Fudan University
  • Yunfan Shao, Fudan University
  • Xiaonan Li, Xidian University
  • Pengfei Liu, Fudan University
  • Hang Yan, Fudan University
  • Xipeng Qiu, Fudan University
  • Xuanjing Huang, Fudan University

DOI:

https://doi.org/10.1609/aaai.v34i05.6424

Abstract

Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing. Choosing a suitable sharing mechanism depends on the relations among the tasks, which is not easy because the underlying factors shared among these tasks are difficult to identify. In this paper, we propose a novel parameter sharing mechanism, named Sparse Sharing. Given multiple tasks, our approach automatically finds a sparse sharing structure. We start with an over-parameterized base network, from which each task extracts a subnetwork. The subnetworks of multiple tasks partially overlap and are trained in parallel. We show that both hard sharing and hierarchical sharing can be formulated as particular instances of the sparse sharing framework. We conduct extensive experiments on three sequence labeling tasks. Compared with single-task models and three typical multi-task learning baselines, our proposed approach achieves consistent improvements while requiring fewer parameters.
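The sketch below illustrates the mechanism described in the abstract, assuming a PyTorch-style setup: a single over-parameterized layer holds the shared weights, each task applies its own fixed binary mask, and the resulting per-task subnetworks partially overlap while being trained in parallel by alternating task batches. All names (SparseSharingLinear, keep_prob) and the randomly drawn masks are illustrative assumptions, not the authors' implementation; in the paper the per-task masks are derived from the base network rather than sampled at random.

import torch
import torch.nn as nn

class SparseSharingLinear(nn.Module):
    """One over-parameterized base layer shared across tasks.
    Each task owns a fixed binary mask; a task only uses (and updates)
    the weights its mask keeps, so task subnetworks may overlap."""
    def __init__(self, in_dim, out_dim, num_tasks, keep_prob=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # Hypothetical masks, drawn at random here only for illustration.
        masks = (torch.rand(num_tasks, out_dim, in_dim) < keep_prob).float()
        self.register_buffer("masks", masks)

    def forward(self, x, task_id):
        # Only the weights kept by this task's mask are active.
        w = self.weight * self.masks[task_id]
        return x @ w.t() + self.bias

# Parallel multi-task training sketch: alternate batches among tasks;
# gradients flow only through each task's own subnetwork.
layer = SparseSharingLinear(in_dim=100, out_dim=50, num_tasks=3)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
for step in range(10):
    task_id = step % 3                 # pick a task for this step
    x = torch.randn(32, 100)           # dummy batch for that task
    y = torch.randn(32, 50)
    loss = ((layer(x, task_id) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()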

Published

2020-04-03

How to Cite

Sun, T., Shao, Y., Li, X., Liu, P., Yan, H., Qiu, X., & Huang, X. (2020). Learning Sparse Sharing Architectures for Multiple Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8936-8943. https://doi.org/10.1609/aaai.v34i05.6424

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing