Paper Title

StableMoE: Stable Routing Strategy for Mixture of Experts

Authors

Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei

Abstract

The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with an affordable computational overhead. We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change along with training, but only one expert will be activated for the input during inference. The routing fluctuation tends to harm sample efficiency because the same input updates different experts but only one is finally used. In this paper, we propose StableMoE with two training stages to address the routing fluctuation problem. In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model. In the second training stage, we utilize the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy. We validate our method on language modeling and multilingual machine translation. The results show that StableMoE outperforms existing MoE methods in terms of both convergence speed and performance.
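The abstract describes a two-stage scheme: in stage 1 a routing strategy is learned and distilled into a lightweight router decoupled from the backbone; in stage 2 that router is frozen so each token's expert assignment stays fixed. The PyTorch sketch below is a minimal illustration of that flow under my own assumptions, not the authors' implementation: the class names (`LightweightRouter`, `MoELayer`), the greedy top-1 assignment, the cross-entropy distillation loss, and the omission of the balance/cohesion objectives from stage 1 are all illustrative choices.

```python
# Minimal sketch of two-stage routing, assuming a top-1 MoE layer.
# Names and structure are hypothetical; they only mirror the abstract's idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightRouter(nn.Module):
    """Token-level router decoupled from the backbone: one embedding per
    vocabulary item, dotted with learned expert centroids."""

    def __init__(self, vocab_size: int, dim: int, num_experts: int):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.expert_centroids = nn.Parameter(torch.randn(num_experts, dim))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Routing logits of shape (..., num_experts).
        return self.token_emb(token_ids) @ self.expert_centroids.t()


class MoELayer(nn.Module):
    def __init__(self, vocab_size: int, dim: int, num_experts: int):
        super().__init__()
        self.backbone_router = nn.Linear(dim, num_experts)   # stage-1 learned router
        self.distilled_router = LightweightRouter(vocab_size, dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.stage = 1

    def freeze_router(self):
        """Enter stage 2: assignments come only from the frozen distilled router."""
        self.stage = 2
        for p in self.distilled_router.parameters():
            p.requires_grad_(False)

    def forward(self, hidden: torch.Tensor, token_ids: torch.Tensor):
        if self.stage == 1:
            teacher_logits = self.backbone_router(hidden)
            student_logits = self.distilled_router(token_ids)
            # Distill the learned routing decisions into the lightweight router.
            distill_loss = F.cross_entropy(
                student_logits.view(-1, student_logits.size(-1)),
                teacher_logits.argmax(dim=-1).view(-1),
            )
            assign = teacher_logits.argmax(dim=-1)
        else:
            distill_loss = hidden.new_zeros(())
            assign = self.distilled_router(token_ids).argmax(dim=-1)

        # Dispatch each token to its single assigned expert (top-1 routing).
        out = torch.zeros_like(hidden)
        for e, expert in enumerate(self.experts):
            mask = assign == e
            if mask.any():
                out[mask] = expert(hidden[mask])
        return out, distill_loss


# Hypothetical usage: run stage 1, then freeze the router for stage 2.
layer = MoELayer(vocab_size=50000, dim=64, num_experts=4)
hidden = torch.randn(2, 8, 64)
token_ids = torch.randint(0, 50000, (2, 8))
out, distill_loss = layer(hidden, token_ids)   # stage 1: route and distill
layer.freeze_router()                          # stage 2: routing is now stable
```

In a fuller implementation, stage 1 would also carry the balance and cohesion objectives mentioned in the abstract, and `freeze_router()` would be called once the distilled router has converged, so that the same token always reaches the same expert for the rest of training and at inference.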
