A Hierarchical Multi-Task Approach for Learning Embeddings from Semantic Tasks


  • Victor Sanh, Hugging Face
  • Thomas Wolf, Hugging Face
  • Sebastian Ruder, National University of Ireland




Much effort has been devoted to evaluating whether multi-task learning can be leveraged to learn rich representations that can be used in various Natural Language Processing (NLP) downstream applications. However, there is still a lack of understanding of the settings in which multi-task learning has a significant effect. In this work, we introduce a hierarchical model trained in a multi-task learning setup on a set of carefully selected semantic tasks. The model is trained in a hierarchical fashion to introduce an inductive bias, supervising low-level tasks at the bottom layers of the model and more complex tasks at the top layers. This model achieves state-of-the-art results on a number of tasks, namely Named Entity Recognition, Entity Mention Detection, and Relation Extraction, without hand-engineered features or external NLP tools such as syntactic parsers. The hierarchical training supervision induces a set of shared semantic representations at the lower layers of the model. We show that as we move from the bottom to the top layers of the model, the hidden states of the layers tend to represent more complex semantic information.
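The core idea of the abstract — supervising simpler tasks from lower layers and harder tasks from higher layers of a shared encoder stack — can be illustrated with a minimal sketch. All sizes, label-set dimensions, and the use of plain feed-forward layers here are illustrative assumptions, not the authors' actual implementation (which uses recurrent encoders trained end to end):

```python
import numpy as np

# Minimal sketch of hierarchical multi-task supervision: each task head
# reads the hidden states of a different encoder depth, so low-level
# tasks (e.g. NER) are supervised near the bottom and complex tasks
# (e.g. relation extraction) near the top. Toy feed-forward layers
# stand in for the paper's encoders; sizes are arbitrary assumptions.

rng = np.random.default_rng(0)

seq_len, d = 5, 8             # tokens in a sentence, hidden size
n_ner, n_emd, n_re = 9, 4, 6  # illustrative label-set sizes per task

x = rng.normal(size=(seq_len, d))  # token embeddings

def encoder_layer(h, w):
    """One toy encoder layer (tanh feed-forward stand-in)."""
    return np.tanh(h @ w)

w1, w2, w3 = (rng.normal(size=(d, d)) for _ in range(3))

h1 = encoder_layer(x, w1)   # bottom layer: supervised by NER
h2 = encoder_layer(h1, w2)  # middle layer: supervised by mention detection
h3 = encoder_layer(h2, w3)  # top layer: supervised by relation extraction

# Per-token logits for each task, read from the matching depth.
ner_logits = h1 @ rng.normal(size=(d, n_ner))
emd_logits = h2 @ rng.normal(size=(d, n_emd))
re_logits = h3 @ rng.normal(size=(d, n_re))
```

In training, each task's loss would backpropagate through its own head and into the shared lower layers, which is what induces the shared semantic representations the abstract refers to.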




How to Cite

Sanh, V., Wolf, T., & Ruder, S. (2019). A Hierarchical Multi-Task Approach for Learning Embeddings from Semantic Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6949-6956. https://doi.org/10.1609/aaai.v33i01.33016949



AAAI Technical Track: Natural Language Processing