TY - JOUR
AU - Fan, Yang
AU - Xia, Yingce
AU - Wu, Lijun
AU - Xie, Shufang
AU - Liu, Weiqing
AU - Bian, Jiang
AU - Qin, Tao
AU - Li, Xiang-Yang
PY - 2021/05/18
Y2 - 2024/03/28
TI - Learning to Reweight with Deep Interactions
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 8
SE - AAAI Technical Track on Machine Learning I
DO - 10.1609/aaai.v35i8.16906
UR - https://ojs.aaai.org/index.php/AAAI/article/view/16906
SP - 7385-7393
AB - Recently the concept of teaching has been introduced into machine learning, in which a teacher model is used to guide the training of a student model (which will be used in real tasks) through data selection, loss function design, etc. Learning to reweight, which is a specific kind of teaching that reweights training data using a teacher model, receives much attention due to its simplicity and effectiveness. In existing learning to reweight works, the teacher model only utilizes shallow/surface information such as training iteration number and loss/accuracy of the student model from training/validation sets, but ignores the internal states of the student model, which limits the potential of learning to reweight. In this work, we propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model, and the teacher model returns adaptive weights of training samples to enhance the training of the student model. The teacher model is jointly trained with the student model using meta gradients propagated from a validation set. Experiments on image classification with clean/noisy labels and neural machine translation empirically demonstrate that our algorithm makes significant improvement over previous methods.
ER -