Information Transfer in Multitask Learning, Data Augmentation, and Beyond
Keywords: New Faculty Highlights
Abstract
A hallmark of human intelligence is that we continue to learn new information and then extrapolate what we have learned onto new tasks and domains (see, e.g., Thrun and Pratt (1998)). While this is a fairly intuitive observation, formalizing such ideas has proved to be a challenging research problem and continues to inspire new studies. Recently, there has been increasing interest in AI/ML in building models that generalize across tasks, even in the presence of distribution shifts. How can we ground this research in a solid framework to develop principled methods for better practice? This talk will present my recent works addressing this research question. My talk will involve three parts: revisiting multitask learning from the lens of deep learning theory, designing principled methods for robust transfer, and algorithmic implications for data augmentation.
How to Cite
Zhang, H. R. (2023). Information Transfer in Multitask Learning, Data Augmentation, and Beyond. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15464-15464. https://doi.org/10.1609/aaai.v37i13.26831