All-in Text: Learning Document, Label, and Word Representations Jointly

Authors

  • Jinseok Nam, Technische Universität Darmstadt
  • Eneldo Loza Mencía, Technische Universität Darmstadt
  • Johannes Fürnkranz, Technische Universität Darmstadt

DOI:

https://doi.org/10.1609/aaai.v30i1.10241

Abstract

Conventional multi-label classification algorithms treat the target labels of the classification task as mere symbols devoid of inherent semantics. However, in many cases textual descriptions of these labels are available or can be easily constructed from public document sources such as Wikipedia. In this paper, we investigate an approach for embedding documents and labels into a joint space while sharing word representations between documents and labels. To find such embeddings, we rely on the text of the documents as well as on descriptions of the labels. The use of such label descriptions not only leads us to expect improved performance on conventional multi-label text classification tasks, but also allows predictions for labels that have not been seen during the training phase. The potential of our method is demonstrated on the multi-label classification task of assigning keywords from the Medical Subject Headings (MeSH) to publications in biomedical research, both in a conventional and in a zero-shot learning setting.
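The core idea of the abstract can be illustrated with a minimal sketch: both documents and label descriptions are mapped into the same vector space via shared word embeddings, so a document can be scored against any label that has a textual description, including labels unseen at training time. Note that this is a simplified illustration using mean-pooled toy embeddings and cosine similarity; the vocabulary, embedding dimension, and pooling choice here are assumptions, and the actual paper learns these representations jointly rather than averaging fixed vectors.

```python
import numpy as np

# Toy shared word embeddings (hypothetical; the paper learns document, label,
# and word representations jointly, which this sketch does not do).
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    ["heart", "disease", "cardiac", "surgery", "gene", "dna", "sequence"])}
E = rng.normal(size=(len(vocab), 8))  # one 8-dimensional vector per word

def embed(text):
    """Embed a text as the mean of its word vectors (a common simplification)."""
    idx = [vocab[w] for w in text.lower().split() if w in vocab]
    return E[idx].mean(axis=0)

def score(doc_text, label_description):
    """Cosine similarity between a document and a label description.

    Because the label is represented through its text, a label that never
    appeared in training data can still be scored (the zero-shot setting).
    """
    d, l = embed(doc_text), embed(label_description)
    return float(d @ l / (np.linalg.norm(d) * np.linalg.norm(l)))

print(score("cardiac surgery heart", "heart disease"))
```

Ranking all candidate labels by this score and taking the top-k (or thresholding) yields a multi-label prediction; the key point is that nothing in the scoring function distinguishes seen from unseen labels.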

Published

2016-02-21

How to Cite

Nam, J., Loza Mencía, E., & Fürnkranz, J. (2016). All-in Text: Learning Document, Label, and Word Representations Jointly. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10241

Section

Technical Papers: Machine Learning Methods