MoEC: Mixture of Expert Clusters

Authors

  • Yuan Xie, Microsoft Research Asia
  • Shaohan Huang, Microsoft Research Asia
  • Tianyu Chen, Microsoft Research Asia
  • Furu Wei, Microsoft Research Asia

DOI:

https://doi.org/10.1609/aaai.v37i11.26617

Keywords:

SNLP: Machine Translation & Multilinguality, SNLP: Learning & Optimization for SNLP

Abstract

Sparse Mixture of Experts (MoE) models have received great interest due to their promising scaling capability with affordable computational overhead. MoE models convert dense layers into sparse experts and utilize a gated routing network to activate experts conditionally. However, as the number of experts grows, MoE models with an outrageously large number of parameters suffer from overfitting and sparse data allocation. These problems are especially severe on tasks with limited data, hindering progress toward improving performance by scaling up. We verify that there exists a performance upper bound when scaling up sparse MoE. In this work, we propose Mixture of Expert Clusters (MoEC), a general approach that enables expert layers to learn more diverse and appropriate knowledge by imposing variance-based constraints on the routing stage. Building on this cluster structure, we further propose a cluster-level expert dropout strategy designed specifically for expert clusters. Our experiments show that MoEC improves performance on machine translation and natural language understanding tasks. MoEC plays a positive role in mitigating overfitting and sparse data allocation, thus fully releasing the potential of large-scale sparse models.
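The abstract names two mechanisms: a variance-based constraint on the routing stage that groups experts into clusters, and a cluster-level expert dropout. The sketch below (PyTorch) is only an illustration of how such a clustered router could be wired up; it is not the authors' implementation, and the class name, the top-1 routing choice, and the exact forms of the variance penalty and cluster dropout are assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of clustered expert routing:
# (1) a variance penalty over routing scores within each expert cluster,
# (2) cluster-level dropout that zeroes whole clusters during training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusteredRouter(nn.Module):
    def __init__(self, d_model: int, num_clusters: int, experts_per_cluster: int,
                 cluster_drop_p: float = 0.1):
        super().__init__()
        self.num_clusters = num_clusters
        self.experts_per_cluster = experts_per_cluster
        num_experts = num_clusters * experts_per_cluster
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.cluster_drop_p = cluster_drop_p

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> routing probabilities over all experts
        logits = self.gate(x)                      # (T, num_experts)
        probs = F.softmax(logits, dim=-1)

        # Cluster-level expert dropout (assumed form): during training, drop the
        # scores of entire clusters with probability cluster_drop_p, then renormalize.
        if self.training and self.cluster_drop_p > 0:
            keep = torch.rand(self.num_clusters, device=x.device) > self.cluster_drop_p
            keep = keep.repeat_interleave(self.experts_per_cluster).float()
            probs = probs * keep
            probs = probs / probs.sum(dim=-1, keepdim=True).clamp_min(1e-9)

        # Variance-based cluster constraint (assumed form): penalize the variance of
        # routing probabilities across experts inside the same cluster, so experts in
        # a cluster receive similar data and learn related knowledge.
        per_cluster = probs.view(-1, self.num_clusters, self.experts_per_cluster)
        variance_loss = per_cluster.var(dim=-1, unbiased=False).mean()

        # Top-1 expert assignment per token, as in standard switch-style routing.
        top_prob, top_idx = probs.max(dim=-1)
        return top_idx, top_prob, variance_loss


if __name__ == "__main__":
    router = ClusteredRouter(d_model=16, num_clusters=4, experts_per_cluster=2)
    tokens = torch.randn(8, 16)
    idx, prob, var_loss = router(tokens)
    print(idx.shape, prob.shape, var_loss.item())
```

In this sketch, the variance term would be added to the task loss as an auxiliary penalty; how it is weighted and combined with load-balancing losses is left unspecified here.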


Published

2023-06-26

How to Cite

Xie, Y., Huang, S., Chen, T., & Wei, F. (2023). MoEC: Mixture of Expert Clusters. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13807-13815. https://doi.org/10.1609/aaai.v37i11.26617

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing