Boosting Complementary Hash Tables for Fast Nearest Neighbor Search

Authors

  • Xianglong Liu, Beihang University
  • Cheng Deng, Xidian University
  • Yadong Mu, Peking University
  • Zhujin Li, Beihang University

DOI:

https://doi.org/10.1609/aaai.v31i1.11204

Keywords:

Nearest Neighbor Search, Boosting, Multiple Indexing, Complementary Hash Tables, Locality Sensitive Hashing

Abstract

Hashing has proven to be a promising technique for fast nearest neighbor search over massive databases. In many practical tasks, multiple hash tables are usually built to reach a desired level of recall. However, existing multi-table hashing methods suffer from heavy table redundancy, lacking both strong table complementarity and effective hash code learning. To address this problem, this paper proposes a multi-table learning method that pursues a specified number of complementary and informative hash tables from the perspective of ensemble learning. By regarding each hash table as a neighbor prediction model, the multi-table search procedure boils down to a linear assembly of predictions stemming from multiple tables. A sequential updating and learning framework is therefore naturally established within a boosting mechanism, theoretically guaranteeing table complementarity and algorithmic convergence. Furthermore, each boosting round pursues discriminative hash functions for its table via a discrete optimization in the binary code space. Extensive experiments on two popular tasks, Euclidean and semantic nearest neighbor search, demonstrate that the proposed boosted complementary hash tables enjoy strong table complementarity and significantly outperform the state-of-the-art methods.
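To make the multi-table idea concrete, the following is a minimal sketch (not the paper's actual algorithm or objective) of boosting-style complementary table construction with random-hyperplane LSH: each new table is chosen, among a few random candidates, to best cover the points that earlier tables left isolated, and a query's candidate set is the union ("linear assembly") of bucket lookups across tables. All function names and the reweighting rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, planes):
    # Sign-based random-hyperplane LSH: pack sign bits into integer bucket keys.
    bits = ((X @ planes.T) > 0).astype(np.int64)
    return bits @ (1 << np.arange(planes.shape[0]))

def build_table(X, planes):
    # Map each bucket key to the indices of the points that fall into it.
    table = {}
    for i, key in enumerate(hash_codes(X, planes)):
        table.setdefault(int(key), []).append(i)
    return table

def boosted_tables(X, n_tables=4, n_bits=8, n_candidates=10):
    # Sequentially pick, for each table, the candidate set of hash functions
    # that best covers points still poorly served by earlier tables -- a
    # boosting-style reweighting, standing in for the paper's learned objective.
    n, d = X.shape
    weights = np.ones(n)  # raised whenever a point ends up alone in its bucket
    tables, all_planes = [], []
    for _ in range(n_tables):
        best_planes, best_score = None, -1.0
        for _ in range(n_candidates):
            planes = rng.standard_normal((n_bits, d))
            keys = hash_codes(X, planes)
            _, inv, cnt = np.unique(keys, return_inverse=True, return_counts=True)
            shared = cnt[inv] > 1  # points that share a bucket with someone
            score = float(weights[shared].sum())
            if score > best_score:
                best_score, best_planes = score, planes
        all_planes.append(best_planes)
        tables.append(build_table(X, best_planes))
        # Reweight: isolated points matter more in the next round.
        keys = hash_codes(X, best_planes)
        _, inv, cnt = np.unique(keys, return_inverse=True, return_counts=True)
        weights[cnt[inv] == 1] *= 2.0
    return tables, all_planes

def query(q, tables, all_planes):
    # Union the bucket candidates from all tables before the final re-ranking.
    cands = set()
    for table, planes in zip(tables, all_planes):
        key = int(hash_codes(q[None, :], planes)[0])
        cands.update(table.get(key, []))
    return cands
```

The union step is what makes complementarity pay off: a later table only adds value if its buckets recover neighbors the earlier tables missed, which is exactly what the reweighting encourages.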

Published

2017-02-12

How to Cite

Liu, X., Deng, C., Mu, Y., & Li, Z. (2017). Boosting Complementary Hash Tables for Fast Nearest Neighbor Search. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11204