Cross-Task Knowledge Distillation in Multi-Task Recommendation
Keywords: Data Mining & Knowledge Management (DMKM)
Abstract
Multi-task learning (MTL) has been widely used in recommender systems, wherein predicting each type of user feedback on items (e.g., click, purchase) is treated as an individual task and jointly trained with a unified model. Our key observation is that the prediction results of each task may contain task-specific knowledge about the user's fine-grained preferences towards items. While such knowledge could be transferred to benefit other tasks, it is overlooked under the current MTL paradigm. This paper instead proposes a Cross-Task Knowledge Distillation (KD) framework that attempts to leverage the prediction results of one task as supervised signals to teach another task. However, integrating MTL and KD properly is non-trivial due to several challenges, including task conflicts, inconsistent magnitudes, and the requirement of synchronous optimization. As countermeasures, we 1) introduce auxiliary tasks with quadruplet loss functions to capture cross-task fine-grained ranking information and avoid task conflicts, 2) design a calibrated distillation approach to align and distill knowledge from the auxiliary tasks, and 3) propose a novel error correction mechanism to enable and facilitate synchronous training of teacher and student models. Comprehensive experiments verify the effectiveness of our framework on real-world datasets.
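To make the core idea of cross-task distillation concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): one task's predictions (e.g., purchase scores) act as temperature-softened soft labels for another task (e.g., click prediction), with the shared temperature serving as a stand-in for the magnitude-alignment role that the paper's calibrated distillation plays. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def cross_task_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Illustrative sketch only: distill one task's predictions into another.

    The teacher task's logits are converted into soft targets via a
    temperature-scaled sigmoid; the student task is then trained with a
    binary cross-entropy against those soft targets. The temperature is a
    simplistic stand-in for aligning the two tasks' output magnitudes.
    """
    def soft_sigmoid(x, t):
        # Temperature-scaled sigmoid: larger t gives softer probabilities.
        return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float) / t))

    p_teacher = soft_sigmoid(teacher_logits, temperature)  # soft labels
    p_student = soft_sigmoid(student_logits, temperature)

    # Binary cross-entropy of the student against the teacher's soft targets.
    eps = 1e-12
    return float(np.mean(
        -(p_teacher * np.log(p_student + eps)
          + (1.0 - p_teacher) * np.log(1.0 - p_student + eps))))
```

In this toy setup, the loss is minimized when the student's softened predictions match the teacher's, so gradients pull the student task's ranking of items toward the teacher task's; the paper's actual framework additionally relies on auxiliary quadruplet-loss tasks and an error correction mechanism, which this sketch omits.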
How to Cite
Yang, C., Pan, J., Gao, X., Jiang, T., Liu, D., & Chen, G. (2022). Cross-Task Knowledge Distillation in Multi-Task Recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4318-4326. https://doi.org/10.1609/aaai.v36i4.20352
AAAI Technical Track on Data Mining and Knowledge Management