DeToNATION: Decoupled Torch Network-Aware Training on Interlinked Online Nodes

Authors

  • Mogens Henrik From, University of Southern Denmark (SDU)
  • Jacob Nielsen, University of Southern Denmark (SDU)
  • Lukas Galke, University of Southern Denmark (SDU)
  • Peter Schneider-Kamp, University of Southern Denmark (SDU)

DOI:

https://doi.org/10.1609/aaai.v40i25.39256

Abstract

Training large neural network models requires extensive computational resources, often distributed across several nodes and accelerators. Recent findings suggest that it may be sufficient to exchange only the fast-moving components of the gradients while accumulating momentum locally (Decoupled Momentum, or DeMo). However, DeMo assumes that models fit on a single accelerator. We relax this assumption and introduce FlexDeMo, whereby nodes fully shard model parameters locally between different accelerators, while inter-node communication is reduced by synchronizing only fast-moving components instead of the full gradients -- resulting in a hybrid sharded data parallel training strategy. We further introduce a framework, denoted DeToNATION, that generalizes DeMo, FlexDeMo, and other popular distributed training schemes such as DiLoCo -- introducing new variations of replication schemes and challenging choices made in DeMo. Our results across language and vision domains show that FlexDeMo attains validation loss similar to that of hybrid sharded data parallel training employing AdamW and full gradient synchronization, while being substantially faster. FlexDeMo is thus a promising distributed training scheme for the largest machine learning models.
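To make the decoupled-momentum idea in the abstract concrete, the following minimal PyTorch sketch illustrates one update step under stated assumptions: top-k-by-magnitude selection stands in for DeMo's transform-based fast-component extraction, the update rule is a plain scaled step rather than DeMo's exact rule, and inter_node_group is a hypothetical process group spanning one rank per node. It is an illustration of the communication pattern, not the authors' implementation.

    import torch
    import torch.distributed as dist

    def flexdemo_style_step(param, momentum, grad,
                            lr=1e-3, beta=0.9, k_frac=0.1,
                            inter_node_group=None):
        """One decoupled-momentum update for a single parameter shard.

        Each accelerator holds only its shard of `param` (FSDP-style
        intra-node sharding); the gradient is accumulated into a local
        momentum buffer, the fast-moving part is split off and averaged
        across nodes, and the slow residual never leaves the node.
        """
        # 1) Accumulate the gradient into the local momentum buffer.
        momentum.mul_(beta).add_(grad)

        # 2) Split off the fast-moving components. Top-k-by-magnitude
        #    selection is a stand-in for DeMo's extraction transform.
        flat = momentum.view(-1)
        k = max(1, int(k_frac * flat.numel()))
        _, idx = flat.abs().topk(k)
        fast = torch.zeros_like(flat)
        fast[idx] = flat[idx]
        flat[idx] = 0.0  # the slow residual stays in local momentum

        # 3) Synchronize only the fast components across nodes. Full
        #    gradient synchronization would all-reduce every element;
        #    here only a k_frac fraction crosses the inter-node link.
        if inter_node_group is not None:
            dist.all_reduce(fast, op=dist.ReduceOp.SUM,
                            group=inter_node_group)
            fast /= dist.get_world_size(group=inter_node_group)

        # 4) Apply the synchronized fast components as the update.
        param.add_(fast.view_as(param), alpha=-lr)

In the actual strategy, the tensors above would be FSDP shards rather than full parameters, and the extraction transform and update rule follow DeMo; this sketch only approximates them to show where inter-node traffic is saved.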

Published

2026-03-14

How to Cite

From, M. H., Nielsen, J., Galke, L., & Schneider-Kamp, P. (2026). DeToNATION: Decoupled Torch Network-Aware Training on Interlinked Online Nodes. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 21128-21135. https://doi.org/10.1609/aaai.v40i25.39256

Section

AAAI Technical Track on Machine Learning II