Robust Model Compression Using Deep Hypotheses

Authors

  • Omri Armstrong, Tel Aviv University
  • Ran Gilad-Bachrach, Tel Aviv University

DOI:

https://doi.org/10.1609/aaai.v35i8.16827

Keywords:

Learning on the Edge & Model Compression, Learning Theory

Abstract

Machine learning models should ideally be compact and robust. Compactness provides efficiency and comprehensibility, whereas robustness provides stability. Both topics have been studied in recent years, but largely in isolation. Here we present a robust model compression scheme that is independent of model type: it can compress ensembles, neural networks, and other kinds of models into diverse types of small models. The main building block is the notion of depth derived from robust statistics. Originally, depth was introduced as a measure of the centrality of a point in a sample, such that the median is the deepest point. This concept was later extended to classification functions, making it possible to define the depth of a hypothesis and the median hypothesis. Algorithms have been suggested for approximating the median, but they have been limited to binary classification. In this study, we present a new algorithm, Multiclass Empirical Median Optimization (MEMO), which finds a deep hypothesis in multi-class tasks, and we prove its correctness. This leads to our Compact Robust Estimated Median Belief Optimization (CREMBO) algorithm for robust model compression. We demonstrate the success of this algorithm empirically by compressing neural networks and random forests into small decision trees, which are interpretable models, and show that the resulting trees are more accurate and robust than those produced by comparable methods. In addition, our empirical study shows that our method outperforms Knowledge Distillation on DNN-to-DNN compression.
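
To make the notion of depth concrete, below is a minimal illustrative sketch (added here for clarity, not code from the paper) of halfspace depth for a one-dimensional sample, where the sample median attains the maximum depth. The paper lifts this idea from points in a sample to classification hypotheses.

```python
import numpy as np

def halfspace_depth_1d(x, sample):
    """Halfspace (Tukey) depth of point x in a 1-D sample:
    the smaller of the number of sample points on each side of x."""
    sample = np.asarray(sample)
    return min(np.sum(sample <= x), np.sum(sample >= x))

sample = np.array([1.0, 2.0, 3.0, 7.0, 50.0])   # toy data with one outlier
depths = {float(x): halfspace_depth_1d(x, sample) for x in sample}
print(depths)              # 3.0 attains the maximum depth of 3
print(np.median(sample))   # 3.0 -- the deepest point, unaffected by the outlier 50.0
```

In the same spirit, a hypothesis is deep when many perturbations of the data agree with it, and the median hypothesis is the deepest one; this is the robustness property the compression scheme exploits.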

Published

2021-05-18

How to Cite

Armstrong, O., & Gilad-Bachrach, R. (2021). Robust Model Compression Using Deep Hypotheses. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6688-6695. https://doi.org/10.1609/aaai.v35i8.16827

Issue

Vol. 35 No. 8 (2021)

Section

AAAI Technical Track on Machine Learning I