TY - JOUR
AU - Yang, Chenxiao
AU - Pan, Junwei
AU - Gao, Xiaofeng
AU - Jiang, Tingyu
AU - Liu, Dapeng
AU - Chen, Guihai
PY - 2022/06/28
Y2 - 2024/03/29
TI - Cross-Task Knowledge Distillation in Multi-Task Recommendation
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 4
SE - AAAI Technical Track on Data Mining and Knowledge Management
DO - 10.1609/aaai.v36i4.20352
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20352
SP - 4318-4326
AB - Multi-task learning (MTL) has been widely used in recommender systems, wherein predicting each type of user feedback on items (e.g., click, purchase) is treated as an individual task and all tasks are jointly trained with a unified model. Our key observation is that the prediction results of each task may contain task-specific knowledge about the user’s fine-grained preference towards items. While such knowledge could be transferred to benefit other tasks, it is overlooked under the current MTL paradigm. This paper instead proposes a Cross-Task Knowledge Distillation framework that leverages the prediction results of one task as supervised signals to teach another task. However, integrating MTL and KD properly is non-trivial due to several challenges, including task conflicts, inconsistent magnitudes, and the requirement of synchronous optimization. As countermeasures, we 1) introduce auxiliary tasks with quadruplet loss functions to capture cross-task fine-grained ranking information and avoid task conflicts, 2) design a calibrated distillation approach to align and distill knowledge from auxiliary tasks, and 3) propose a novel error correction mechanism to enable and facilitate synchronous training of teacher and student models. Comprehensive experiments on real-world datasets are conducted to verify the effectiveness of our framework.
ER -