Muon is Scalable for LLM Training

Jingyuan Liu, Jianlin Su, Xingcheng Yao, Zhejun Jiang, Guokun Lai, Yulun Du, Yiming Qin, Wei-Xin Xu, Enzhe Lu, Junjie Yan, Yanru Chen, Huabin Zheng, Yibo Liu, Shaowei Liu, Benfeng Yin, W. He, Han Zhu, Yuzhi Wang, Jianzhou Wang, Mengjiao Dong, Zheng Zhang, Kang Yongsheng, Hao‐Li Zhang, Xinran Xu, Yutao Zhang, Yuxin Wu, X. H. Zhou, Zhilin Yang
2025

Abstract

Recently, the Muon optimizer, based on matrix orthogonalization, has demonstrated strong results in training small-scale language models, but its scalability to larger models has not been proven. We identify two crucial techniques for scaling up Muon: (1) adding weight decay and (2) carefully adjusting the per-parameter update scale. These techniques allow Muon to work out of the box in large-scale training without hyperparameter tuning. Scaling-law experiments indicate that Muon achieves $\sim\!2\times$ computational efficiency compared to AdamW under compute-optimal training. Based on these improvements, we introduce Moonlight, a 3B/16B-parameter Mixture-of-Experts (MoE) model trained with 5.7T tokens using Muon. Our model improves the current Pareto frontier, achieving better performance with far fewer training FLOPs than prior models. We open-source our distributed Muon implementation, which is memory-optimal and communication-efficient, and release the pretrained, instruction-tuned, and intermediate checkpoints to support future research.
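
To make the recipe in the abstract concrete, the sketch below applies the two techniques to a single weight matrix: the momentum is orthogonalized with a Newton-Schulz iteration, the update is rescaled so its RMS matches AdamW's typical ~0.2, and decoupled weight decay is applied. This is a minimal single-matrix sketch in PyTorch, not the paper's distributed implementation; the Newton-Schulz coefficients and the 0.2 * sqrt(max(n, m)) scaling factor follow common open-source Muon code, and the function names and hyperparameter values (lr, mu, weight_decay) are illustrative assumptions.

import torch

@torch.no_grad()
def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Approximate the nearest semi-orthogonal matrix (U V^T from G's SVD)
    # with a quintic Newton-Schulz iteration; coefficients follow the
    # open-source Muon reference implementation.
    assert G.ndim == 2
    a, b, c = (3.4445, -4.7750, 2.0315)
    X = G.bfloat16()
    if G.size(0) > G.size(1):
        X = X.T                           # iterate on the wide orientation
    X = X / (X.norm() + eps)              # bring the spectral norm below ~1
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if G.size(0) > G.size(1):
        X = X.T
    return X

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=2e-2, mu=0.95, weight_decay=0.1):
    # One Muon update for a single 2-D weight matrix (hyperparameters illustrative).
    momentum_buf.mul_(mu).add_(grad)                  # heavy-ball momentum
    update = newton_schulz_orthogonalize(momentum_buf)
    # Rescale so the update RMS is ~0.2, matching AdamW, so AdamW-tuned
    # learning rates and weight decay transfer across matrix shapes.
    n, m = param.shape
    update = update * (0.2 * max(n, m) ** 0.5)
    param.mul_(1 - lr * weight_decay)                 # decoupled (AdamW-style) weight decay
    param.add_(update.to(param.dtype), alpha=-lr)

In practice, an update like this would be applied only to 2-D weight matrices, with embeddings, output heads, and 1-D parameters typically left on AdamW, as in the reference Muon setup.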

Code References

pytorch/pytorch: torch/optim/_muon.py
https://arxiv.org/pdf/2502.16982