Multi-Task Active Learning with Output Constraints

Authors

  • Yi Zhang, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v24i1.7698

Abstract

Many problems in information extraction, text mining, natural language processing, and other fields exhibit the same property: multiple prediction tasks are related in the sense that their outputs (labels) satisfy certain constraints. In this paper, we propose an active learning framework that exploits such relations among tasks. Intuitively, with task outputs coupled by constraints, active learning can utilize not only the uncertainty of the prediction in a single task but also the inconsistency of predictions across tasks. We formalize this idea as a cross-task value of information criterion, in which the reward of a labeling assignment is propagated and measured over all relevant tasks reachable through constraints. A specific example of our framework leads to a cross-entropy measure on the predictions of coupled tasks, which generalizes the entropy used in classical single-task uncertainty sampling. We conduct experiments on two real-world problems: web information extraction and document classification. Empirical results demonstrate the effectiveness of our framework in actively collecting labeled examples for multiple related tasks.
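The abstract contrasts classical entropy-based uncertainty sampling with a cross-entropy measure over coupled tasks. The following is a minimal sketch of that contrast, not the paper's actual method: it assumes a simplified two-task setting in which output constraints map both tasks' predictions onto a shared label space, so cross-task inconsistency can be scored directly as cross entropy. All function names (`entropy`, `cross_entropy`, `select_query`) are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a predicted label distribution
    (classical single-task uncertainty sampling score)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """Cross entropy H(p, q) between two tasks' predictions on the
    same example; reduces to entropy(p) when q equals p, so it
    generalizes the single-task score."""
    q = np.clip(q, 1e-12, 1.0)
    return -np.sum(p * np.log(q))

def select_query(preds_task_a, preds_task_b):
    """Score each unlabeled example by cross-task disagreement and
    return the index of the most inconsistent one to query next."""
    scores = [cross_entropy(pa, pb)
              for pa, pb in zip(preds_task_a, preds_task_b)]
    return int(np.argmax(scores))
```

When the two tasks agree on an example, the cross-entropy score collapses to that example's single-task entropy; when they disagree, the score grows, which is the intuition the abstract describes.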

Published

2010-07-03

How to Cite

Zhang, Y. (2010). Multi-Task Active Learning with Output Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 667-672. https://doi.org/10.1609/aaai.v24i1.7698