TY - JOUR
AU - Wang, Zifeng
AU - Zhu, Hong
AU - Dong, Zhenhua
AU - He, Xiuqiang
AU - Huang, Shao-Lun
PY - 2020/04/03
Y2 - 2024/03/29
TI - Less Is Better: Unweighted Data Subsampling via Influence Function
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 04
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v34i04.6103
UR - https://ojs.aaai.org/index.php/AAAI/article/view/6103
SP - 6340-6347
AB - In the era of Big Data, training complex models on large-scale data sets is challenging, making it appealing to reduce data volume by subsampling in order to save computation resources. Most previous subsampling works are weighted methods designed to help the performance of the subset-model approach that of the full-set-model; hence, weighted methods have no chance of acquiring a subset-model that is better than the full-set-model. We instead ask: how can we achieve a better model with less data? In this work, we propose a novel Unweighted Influence Data Subsampling (UIDS) method and prove that the subset-model acquired through our method can outperform the full-set-model. Besides, we show that overconfidence on the given test set used for sampling is common in influence-based subsampling methods, which can eventually cause the subset-model to fail on out-of-sample tests. To mitigate this, we develop a probabilistic sampling scheme that controls the worst-case risk over all distributions close to the empirical distribution. The experimental results demonstrate our method's superiority over existing subsampling methods in diverse tasks, such as text classification, image classification, click-through prediction, etc.
ER -