TY - JOUR
AU - Dai, Wei
AU - Kumar, Abhimanu
AU - Wei, Jinliang
AU - Ho, Qirong
AU - Gibson, Garth
AU - Xing, Eric
PY - 2015/02/09
Y2 - 2024/03/29
TI - High-Performance Distributed ML at Scale through Parameter Server Consistency Models
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 29
IS - 1
SE - AAAI Technical Track: AI and the Web
DO - 10.1609/aaai.v29i1.9195
UR - https://ojs.aaai.org/index.php/AAAI/article/view/9195
SP -
AB - As Machine Learning (ML) applications embrace greater data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Effective use of clusters for ML programs requires considerable expertise in writing distributed code, but existing highly abstracted frameworks like Hadoop that pose low barriers to distributed programming have not, in practice, matched the performance seen in highly specialized and advanced ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML programs into distributed ones, while maintaining high throughput through relaxed "consistency models" that allow asynchronous (and, hence, inconsistent) parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoretically motivated but unexplored opportunities to maximize computational throughput. Inspired by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an "eager" PS communication mechanism, and implement it as a new PS system that enables ML programs to reach their solution more quickly.
ER -