A POMDP Formulation of Proactive Learning

Authors

  • Kyle Wray, University of Massachusetts Amherst
  • Shlomo Zilberstein, University of Massachusetts Amherst

DOI

https://doi.org/10.1609/aaai.v30i1.10400

Keywords

Proactive Learning, POMDP, PBVI

Abstract

We cast the Proactive Learning (PAL) problem, Active Learning (AL) with multiple reluctant, fallible, and cost-varying oracles, as a Partially Observable Markov Decision Process (POMDP). At each time step, the agent selects an oracle to label a data point while maintaining a belief over the correctness of its current dataset’s labels. The goal is to minimize labeling costs while accounting for the value of obtaining correct labels, thereby maximizing the accuracy of the resulting classifier. We prove three properties showing that our formulation yields a structured, bounded-size set of belief points, which enables point-based methods to solve the POMDP effectively. We compare our method with the three original algorithms proposed by Donmez and Carbonell, as well as a simple baseline, and demonstrate that it matches or improves upon the original approach across five different oracle scenarios, each evaluated on two datasets. Finally, our formulation provides a general, well-defined mathematical foundation to build upon.
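
As a rough illustration of the belief reasoning sketched in the abstract, the following minimal Python example shows one plausible ingredient of such a formulation: a Bayesian update of the belief that a label is correct, plus a myopic rule for choosing among oracles that differ in accuracy, reluctance, and cost. Every name and number here is an illustrative assumption, not the paper's formulation, which solves the full sequential POMDP with point-based methods such as PBVI.

```python
# Hypothetical sketch only: NOT the paper's formulation. It illustrates
# (a) maintaining a belief b = P(current label is correct) and
# (b) a one-step greedy choice among reluctant, fallible, cost-varying oracles.
import math
from dataclasses import dataclass


@dataclass
class Oracle:
    name: str
    accuracy: float     # P(oracle reports the true label); assumed known
    answer_prob: float  # P(oracle answers at all), modeling reluctance
    cost: float         # cost charged per query


def belief_update(b: float, oracle: Oracle, agreed: bool) -> float:
    """Bayes update of b after the oracle agrees/disagrees with the current
    label. Assumes binary labels with symmetric oracle errors."""
    if agreed:
        num = b * oracle.accuracy
        den = num + (1 - b) * (1 - oracle.accuracy)
    else:
        num = b * (1 - oracle.accuracy)
        den = num + (1 - b) * oracle.accuracy
    return num / den


def entropy(p: float) -> float:
    """Binary entropy in bits; zero at the deterministic endpoints."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))


def myopic_score(b: float, oracle: Oracle, label_value: float) -> float:
    """One-step stand-in for the POMDP value: expected reduction in label
    uncertainty (scaled by the value of a correct label) minus query cost."""
    p_agree = b * oracle.accuracy + (1 - b) * (1 - oracle.accuracy)
    h_after = (p_agree * entropy(belief_update(b, oracle, True))
               + (1 - p_agree) * entropy(belief_update(b, oracle, False)))
    info_gain = entropy(b) - h_after
    return oracle.answer_prob * label_value * info_gain - oracle.cost


if __name__ == "__main__":
    oracles = [
        Oracle("cheap-noisy", accuracy=0.7, answer_prob=0.9, cost=0.1),
        Oracle("pricey-expert", accuracy=0.95, answer_prob=0.6, cost=0.5),
    ]
    b = 0.5  # maximal uncertainty about the current label
    best = max(oracles, key=lambda o: myopic_score(b, o, label_value=1.0))
    print(f"query {best.name}; belief if it agrees: {belief_update(b, best, True):.3f}")
```

The one-step greedy rule above is only a stand-in: a sequential POMDP solution can trade off query costs against information over many future queries, which a myopic score cannot.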

Published

2016-03-05

How to Cite

Wray, K., & Zilberstein, S. (2016). A POMDP Formulation of Proactive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10400

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Planning and Scheduling