Similarity Search for Efficient Active Learning and Search of Rare Concepts

Authors

  • Cody Coleman Stanford University
  • Edward Chou Facebook AI
  • Julian Katz-Samuels University of Wisconsin
  • Sean Culatana Facebook AI
  • Peter Bailis Stanford University
  • Alexander C. Berg Facebook AI Research
  • Robert Nowak University of Wisconsin
  • Roshan Sumbaly Facebook AI
  • Matei Zaharia Stanford University
  • I. Zeki Yalniz Facebook AI

DOI:

https://doi.org/10.1609/aaai.v36i6.20591

Keywords:

Machine Learning (ML), Computer Vision (CV), Humans And AI (HAI)

Abstract

Many active learning and search approaches are intractable for large-scale industrial settings with billions of unlabeled examples. Existing approaches search globally for the optimal examples to label, scaling linearly or even quadratically with the unlabeled data. In this paper, we improve the computational efficiency of active learning and search methods by restricting the candidate pool for labeling to the nearest neighbors of the currently labeled set instead of scanning over all of the unlabeled data. We evaluate several selection strategies in this setting on three large-scale computer vision datasets: ImageNet, OpenImages, and a de-identified and aggregated dataset of 10 billion publicly shared images provided by a large internet company. Our approach achieved similar mAP and recall as the traditional global approach while reducing the computational cost of selection by up to three orders of magnitude, enabling web-scale active learning.
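The core idea in the abstract — build the candidate pool from the labeled set's nearest neighbors rather than scanning every unlabeled example — can be sketched as follows. This is a minimal illustration with toy NumPy embeddings and a stand-in uncertainty score, not the authors' implementation; at the billion-image scale described in the paper, the brute-force k-NN step would be replaced by an approximate similarity-search index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a large unlabeled pool and a small labeled seed set of
# embedding vectors (the paper operates on image embeddings).
unlabeled = rng.normal(size=(10_000, 32))
labeled = rng.normal(size=(10, 32))

def nearest_neighbors(queries, pool, k):
    """Brute-force k-NN by cosine similarity. A production system would
    use an approximate nearest-neighbor index instead of a full matmul."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    sims = q @ p.T  # shape: (n_labeled, n_pool)
    # Top-k pool indices per labeled example, deduplicated across queries.
    return np.unique(np.argsort(-sims, axis=1)[:, :k])

# Candidate pool = union of each labeled example's k nearest unlabeled
# neighbors, bounded by |labeled| * k regardless of total pool size.
candidates = nearest_neighbors(labeled, unlabeled, k=100)

# Any selection strategy then runs only over the candidates. Here we use
# plain uncertainty sampling on stand-in model confidence scores.
scores = rng.uniform(size=len(unlabeled))   # hypothetical model outputs
uncertainty = -np.abs(scores - 0.5)         # nearer 0.5 = more uncertain
batch = candidates[np.argsort(-uncertainty[candidates])[:16]]
print(len(candidates), len(batch))
```

The selection step costs time proportional to the candidate pool (at most `|labeled| * k` examples) rather than to the full unlabeled set, which is the source of the speedup the abstract reports.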

Published

2022-06-28

How to Cite

Coleman, C., Chou, E., Katz-Samuels, J., Culatana, S., Bailis, P., Berg, A. C., Nowak, R., Sumbaly, R., Zaharia, M., & Yalniz, I. Z. (2022). Similarity Search for Efficient Active Learning and Search of Rare Concepts. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6402-6410. https://doi.org/10.1609/aaai.v36i6.20591

Section

AAAI Technical Track on Machine Learning I