Information Transfer in Multitask Learning, Data Augmentation, and Beyond


  • Hongyang R. Zhang Northeastern University, Boston, MA



New Faculty Highlights


A hallmark of human intelligence is that we continually learn new information and then extrapolate what we have learned to new tasks and domains (see, e.g., Thrun and Pratt (1998)). While this is a fairly intuitive observation, formalizing such ideas has proved to be a challenging research problem and continues to inspire new studies. Recently, there has been increasing interest in AI/ML in building models that generalize across tasks, even when the tasks exhibit some form of distribution shift. How can we ground this research in a solid framework to develop principled methods for better practice? This talk presents my recent work addressing this research question. The talk comprises three parts: revisiting multitask learning through the lens of deep learning theory, designing principled methods for robust transfer, and algorithmic implications for data augmentation.




How to Cite

Zhang, H. R. (2023). Information Transfer in Multitask Learning, Data Augmentation, and Beyond. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15464.