Hypothesis Disparity Regularized Mutual Information Maximization

Authors

  • Qicheng Lao (West China Biomedical Big Data Center, West China Hospital of Sichuan University; Montreal Institute for Learning Algorithms (MILA), Université de Montréal; Imagia)
  • Xiang Jiang (Imagia)
  • Mohammad Havaei (Imagia)

Keywords

Transfer/Adaptation/Multi-task/Meta/Automated Learning, Unsupervised & Self-Supervised Learning, Ensemble Methods

Abstract

We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer---as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA)---where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner. In contrast to the prevalent HTL and UDA approaches that typically use a single hypothesis, HDMI employs multiple hypotheses to leverage the underlying distributions of the source and target hypotheses. To better utilize the crucial relationship among different hypotheses---as opposed to unconstrained optimization of each hypothesis independently---while adapting to the unlabeled target domain through mutual information maximization, HDMI incorporates a hypothesis disparity regularization that coordinates the target hypotheses to jointly learn better target representations while preserving more transferable source knowledge with better-calibrated prediction uncertainty. HDMI achieves state-of-the-art adaptation performance on benchmark datasets for UDA in the context of HTL, without needing access to the source data during adaptation.
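The abstract describes two interacting terms: mutual information maximization between target inputs and predictions, and a hypothesis disparity regularizer that couples multiple target hypotheses. As a minimal sketch of how such an objective could be composed, the snippet below uses the standard information-maximization decomposition (marginal entropy minus mean conditional entropy) and, as an illustrative assumption not taken from the paper, measures disparity as the mean KL divergence of each hypothesis's predictions from the ensemble average. The function name `hdmi_style_loss` and the regularization weight `lam` are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1, eps=1e-12):
    # Shannon entropy of each probability row
    return -(p * np.log(p + eps)).sum(axis=axis)

def hdmi_style_loss(logits_list, lam=1.0, eps=1e-12):
    """Illustrative loss for K target hypotheses (not the paper's exact form).

    logits_list: list of (N, C) arrays, one per hypothesis head.
    Mutual information per head: H(mean prediction) - mean per-sample entropy.
    Disparity (assumed form): mean KL(head || ensemble average prediction).
    Returns a scalar to minimize: -avg MI + lam * disparity.
    """
    probs = [softmax(l) for l in logits_list]

    # Average mutual information across hypotheses
    mi_avg = np.mean([
        entropy(p.mean(axis=0)) - entropy(p).mean() for p in probs
    ])

    # Hypothesis disparity: KL of each head to the ensemble prediction
    ensemble = np.mean(probs, axis=0)  # (N, C)
    disparity = np.mean([
        (p * (np.log(p + eps) - np.log(ensemble + eps))).sum(axis=-1).mean()
        for p in probs
    ])
    return -mi_avg + lam * disparity
```

With identical hypotheses the disparity term vanishes and the loss reduces to negated mutual information, which matches the intuition that the regularizer only penalizes disagreement among heads.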

Published

2021-05-18

How to Cite

Lao, Q., Jiang, X., & Havaei, M. (2021). Hypothesis Disparity Regularized Mutual Information Maximization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8243-8251. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17003

Section

AAAI Technical Track on Machine Learning II