Paper Title
Decentralized Training of Foundation Models in Heterogeneous Environments
Paper Authors
Paper Abstract
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational "tasklets" in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for learning over geo-distributed devices simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8X faster than prior state-of-the-art training systems (Megatron).
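To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of the kind of evolutionary search it describes: assigning training "tasklets" to decentralized GPU devices so as to minimize a communication-cost model. All names, the cost function, and the mutation scheme are illustrative assumptions, not the paper's actual algorithm or cost model.

```python
import random

def comm_cost(assignment, bandwidth, traffic):
    """Toy cost model: for each pair of communicating tasklets placed on
    different devices, add data volume divided by inter-device bandwidth."""
    total = 0.0
    for (a, b), volume in traffic.items():
        da, db = assignment[a], assignment[b]
        if da != db:  # co-located tasklets communicate for free
            total += volume / bandwidth[da][db]
    return total

def evolve(n_tasklets, devices, bandwidth, traffic,
           pop_size=40, generations=200, seed=0):
    """Simple (mu + lambda)-style evolutionary search over allocations:
    keep the cheapest half of the population, mutate each survivor."""
    rng = random.Random(seed)
    pop = [[rng.choice(devices) for _ in range(n_tasklets)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: comm_cost(a, bandwidth, traffic))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            # mutate: reassign one random tasklet to a random device
            child[rng.randrange(n_tasklets)] = rng.choice(devices)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: comm_cost(a, bandwidth, traffic))

# Usage: two devices joined by a slow link; tasklet pairs (0,1) and (2,3)
# exchange heavy traffic, so the search should co-locate each pair.
devices = ["gpu0", "gpu1"]
bandwidth = {"gpu0": {"gpu1": 1.0}, "gpu1": {"gpu0": 1.0}}
traffic = {(0, 1): 10.0, (2, 3): 10.0}
best = evolve(4, devices, bandwidth, traffic)
```

The point of the sketch is only the shape of the problem: an allocation is a genome, the cost model scores it against network bandwidths, and mutation plus selection searches the assignment space, which is what a scheduler must do when the interconnect is slow and heterogeneous rather than a uniform data-center fabric.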