Towards Reusable Network Components by Learning Compatible Representations

Authors

  • Michael Gygli, Google Research
  • Jasper Uijlings, Google Research
  • Vittorio Ferrari, Google Research

Keywords:

Representation Learning

Abstract

This paper takes a first step towards compatible and hence reusable network components. Rather than training networks for different tasks independently, we adapt the training process to produce network components that are compatible across tasks. In particular, we split a network into two components, a feature extractor and a target task head, and propose various approaches to achieve compatibility between them. We systematically analyse these approaches on the task of image classification on standard datasets. We demonstrate that we can produce components which are directly compatible without any fine-tuning or compromising accuracy on the original tasks. We then demonstrate the use of compatible components on three applications: unsupervised domain adaptation, transferring classifiers across feature extractors with different architectures, and increasing the computational efficiency of transfer learning.
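The core idea of the abstract, splitting a network into a feature extractor and a task head so the head can be reused across extractors, can be illustrated with a minimal toy sketch. This is not the paper's actual code or training procedure: the extractors, the permutation matrix `P`, and the alignment step are all illustrative assumptions, with `P.T` standing in for the compatibility that the paper instead obtains through training.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(W, x):
    """Toy feature extractor: ReLU(W x)."""
    return np.maximum(W @ x, 0.0)

d_in, d_feat, n_classes = 3, 4, 2
x = rng.standard_normal(d_in)

# Extractor A, and a head defined on A's representation space.
W_a = rng.standard_normal((d_feat, d_in))
head = rng.standard_normal((n_classes, d_feat))

# Extractor B computes the same function with permuted hidden units:
# extract(P @ W_a, x) == P @ extract(W_a, x), since a permutation
# commutes with the elementwise ReLU.
P = np.eye(d_feat)[[1, 2, 3, 0]]  # fixed non-identity permutation
W_b = P @ W_a

# Naively reusing A's head on B's raw features gives different logits...
logits_a = head @ extract(W_a, x)
logits_naive = head @ extract(W_b, x)

# ...but once B's representation is made compatible with A's (here by
# undoing the permutation with P.T), the head transfers with no
# fine-tuning, which is the property the paper trains for directly.
logits_compatible = head @ (P.T @ extract(W_b, x))

assert np.allclose(logits_a, logits_compatible)
print(logits_a, logits_compatible)
```

In this toy setting the mismatch is a known permutation, so it can be undone in closed form; the point of the paper is to make independently trained components land in a shared representation space so no such correction is needed.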

Published

2021-05-18

How to Cite

Gygli, M., Uijlings, J., & Ferrari, V. (2021). Towards Reusable Network Components by Learning Compatible Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7620-7629. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16932

Section

AAAI Technical Track on Machine Learning II