Tricking the Hashing Trick: A Tight Lower Bound on the Robustness of CountSketch to Adaptive Inputs

Authors

  • Edith Cohen (Google Research; Tel Aviv University)
  • Jelani Nelson (UC Berkeley; Google Research)
  • Tamas Sarlos (Google Research)
  • Uri Stemmer (Tel Aviv University; Google Research)

DOI:

https://doi.org/10.1609/aaai.v37i6.25882

Keywords:

ML: Dimensionality Reduction/Feature Selection, DMKM: Data Stream Mining, ML: Adversarial Learning & Robustness

Abstract

CountSketch and Feature Hashing (the "hashing trick") are popular randomized dimensionality reduction methods that support recovery of ℓ2-heavy hitters and approximate inner products. When the inputs are not adaptive (do not depend on prior outputs), classic estimators applied to a sketch of size O(ℓ/ε) are accurate for a number of queries that is exponential in ℓ. When inputs are adaptive, however, an adversarial input can be constructed after O(ℓ) queries with the classic estimator, and the best known robust estimator supports only Õ(ℓ²) queries. In this work we show that this quadratic dependence is in a sense inherent: we design an attack that, after O(ℓ²) queries, produces an adversarial input vector whose sketch is highly biased. Our attack uses "natural" non-adaptive inputs (only the final adversarial input is chosen adaptively) and applies universally to any correct estimator, including one that is unknown to the attacker. In doing so, we expose an inherent vulnerability of this fundamental method.
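For readers unfamiliar with the sketch the abstract refers to, the following is a minimal illustrative CountSketch in Python with the classic median estimator. It is a sketch of the standard data structure under assumed toy parameters (dimension n, bucket width w, depth d), not the paper's code or its attack; the hash functions are represented as explicit random lookup tables for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

n, w, d = 1000, 50, 5  # input dimension, buckets per row, number of rows

# Random hash functions, stored as explicit lookup tables:
# h_j: [n] -> [w] picks a bucket, s_j: [n] -> {-1, +1} picks a sign.
buckets = rng.integers(0, w, size=(d, n))
signs = rng.choice([-1, 1], size=(d, n))

def sketch(x):
    """Compress x in R^n into a d x w table of signed bucket sums."""
    C = np.zeros((d, w))
    for j in range(d):
        # Scatter-add s_j(i) * x_i into bucket h_j(i) of row j.
        np.add.at(C[j], buckets[j], signs[j] * x)
    return C

def estimate(C, i):
    """Classic estimator for x_i: median of the d signed bucket values."""
    return np.median([signs[j, i] * C[j, buckets[j, i]] for j in range(d)])

# On a non-adaptive input with one heavy coordinate, the estimate is accurate.
x = rng.normal(0, 1, size=n)  # light noise on every coordinate
x[7] += 100.0                 # one heavy hitter
C = sketch(x)
print(estimate(C, 7))         # close to x[7]
```

The adaptive setting studied in the paper differs in that later query vectors may depend on earlier estimates, which is what eventually lets an adversary bias the sketch after roughly ℓ² such queries.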


Published

2023-06-26

How to Cite

Cohen, E., Nelson, J., Sarlos, T., & Stemmer, U. (2023). Tricking the Hashing Trick: A Tight Lower Bound on the Robustness of CountSketch to Adaptive Inputs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7235-7243. https://doi.org/10.1609/aaai.v37i6.25882

Section

AAAI Technical Track on Machine Learning I