Maximum Roaming Multi-Task Learning
Keywords: Transfer/Adaptation/Multi-task/Meta/Automated Learning, (Deep) Neural Network Algorithms
Abstract
Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights, whether the partitions are disjoint or overlapping. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning, while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization and consistently achieves improved performance compared to recent multi-task learning formulations.
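The roaming idea summarized in the abstract can be illustrated with a minimal sketch: task-specific binary masks over a shared parameter set are periodically updated so that every parameter eventually serves every task. The function names, the swap heuristic, and the per-step update size below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def init_masks(num_tasks, num_params, p=0.5, seed=0):
    """Randomly assign each parameter to a subset of tasks (overlap allowed).

    Illustrative setup, not the paper's exact initialization: masks[t][i]
    is True when parameter i is currently active for task t.
    """
    rng = random.Random(seed)
    masks = [[rng.random() < p for _ in range(num_params)]
             for _ in range(num_tasks)]
    for row in masks:  # guarantee every task has at least one active parameter
        if not any(row):
            row[rng.randrange(num_params)] = True
    # visited[t] records every parameter that task t has used so far
    visited = [set(i for i, on in enumerate(row) if on) for row in masks]
    return masks, visited, rng

def roaming_step(masks, visited, rng):
    """One roaming update (assumed heuristic): for each task, deactivate one
    active parameter and activate one the task has never used, so the
    partitions drift until every parameter has visited every task."""
    num_params = len(masks[0])
    for t, row in enumerate(masks):
        unvisited = [i for i in range(num_params) if i not in visited[t]]
        if not unvisited:
            continue  # task t has already visited all parameters
        active = [i for i, on in enumerate(row) if on]
        out_i, in_i = rng.choice(active), rng.choice(unvisited)
        row[out_i], row[in_i] = False, True
        visited[t].add(in_i)
```

In the actual method, such mask updates would be applied at a regulated frequency during training (e.g. every few epochs), giving the network time to adapt between partition changes; here the swap keeps the number of active parameters per task constant while monotonically growing each task's visited set.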
How to Cite
Pascal, L., Michiardi, P., Bost, X., Huet, B., & Zuluaga, M. A. (2021). Maximum Roaming Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9331-9341. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17125
AAAI Technical Track on Machine Learning III