TODTLER: Two-Order-Deep Transfer Learning

Authors

  • Jan Van Haaren, KU Leuven
  • Andrey Kolobov, Microsoft Research
  • Jesse Davis, KU Leuven

DOI

https://doi.org/10.1609/aaai.v29i1.9624

Keywords

Transfer learning

Abstract

The traditional way of obtaining models from data, inductive learning, has proved itself both in theory and in many practical applications. However, in domains where data is difficult or expensive to obtain, e.g., medicine, deep transfer learning is a more promising technique. It circumvents the model acquisition difficulties caused by scarce data in a target domain by carrying over structural properties of a model learned in a source domain where training data is ample. Nonetheless, the lack of a principled view of transfer learning has so far limited its adoption. In this paper, we address this issue by regarding transfer learning as a process that biases learning in a target domain in favor of patterns useful in a source domain. Specifically, we consider a first-order logic model of the data as an instantiation of a set of second-order templates. Hence, the usefulness of a model is partly determined by the learner's prior distribution over these template sets. The main insight of our work is that transferring knowledge amounts to acquiring a posterior over the second-order template sets by learning in the source domain and using this posterior when learning in the target setting. Our experimental evaluation demonstrates that our approach outperforms existing transfer learning techniques in terms of both accuracy and runtime.
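
To make the two-phase view concrete, the following is a minimal Python sketch of the idea the abstract describes, not the authors' implementation: the functions template_of, template_posterior, and biased_score, the clause encoding, and the frequency-based "posterior" are all illustrative simplifications. The paper itself works with full first-order logic models and a proper Bayesian treatment of second-order template sets.

from collections import Counter
from math import log

def template_of(clause):
    # Abstract a first-order clause (a tuple of (predicate, args) literals)
    # into the second-order template it instantiates by renaming concrete
    # predicates and variables to canonical placeholders.
    preds, vars_ = {}, {}
    literals = []
    for pred, args in clause:
        p = preds.setdefault(pred, "P%d" % len(preds))
        vs = tuple(vars_.setdefault(a, "v%d" % len(vars_)) for a in args)
        literals.append((p, vs))
    return tuple(literals)

def template_posterior(source_clauses, alpha=1.0):
    # Toy stand-in for the posterior over template sets: the smoothed
    # relative frequency with which each template is instantiated by
    # clauses found useful in the source domain.
    counts = Counter(template_of(c) for c in source_clauses)
    total = sum(counts.values()) + alpha * len(counts)
    return {t: (n + alpha) / total for t, n in counts.items()}

def biased_score(clause, data_score, posterior, floor=1e-6):
    # Target-domain clause score: the clause's fit to the target data
    # plus a bias toward templates that proved useful in the source.
    return data_score + log(posterior.get(template_of(clause), floor))

# Hypothetical source domain (academia): both clauses instantiate the
# same second-order template, P0(v0, v1) => P1(v0, v1).
source = [
    (("advises", ("x", "y")), ("coauthor", ("x", "y"))),
    (("teaches", ("x", "c")), ("taughtBy", ("x", "c"))),
]
posterior = template_posterior(source)

# Hypothetical target domain (movies): this candidate clause instantiates
# the same template, so its score is boosted relative to clauses whose
# templates were never seen in the source domain.
candidate = (("directs", ("d", "m")), ("workedOn", ("d", "m")))
print(biased_score(candidate, data_score=-2.0, posterior=posterior))

Note that unseen templates receive a small floor probability rather than zero, so in this sketch the source domain biases target-domain learning without forbidding novel patterns, matching the abstract's framing of transfer as a bias rather than a hard constraint.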

Published

2015-02-21

How to Cite

Van Haaren, J., Kolobov, A., & Davis, J. (2015). TODTLER: Two-Order-Deep Transfer Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9624

Issue

Vol. 29 No. 1 (2015)

Section

Main Track: Novel Machine Learning Algorithms