[Feature] Expert parallelism support for MoE models#96
Open
NikitosKh wants to merge 3 commits into sgl-project:main
Conversation
Collaborator: Could you help resolve the conflict? Thanks for the PR
This adds expert parallelism (EP) for MoE models. The approach follows SGLang's EP design — dispatch tokens to the rank that owns the target expert via all-to-all, compute locally using the existing fused MoE kernels from #59, then combine results back.
Depends on #93 for the streaming weight loader (which also handles EP expert partitioning).
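For reference, a minimal sketch of what the expert partitioning amounts to, assuming a contiguous block of experts per rank; the helper name here is made up and the real logic lives in the #93 loader, not in this PR:

```python
def local_expert_ids(num_experts: int, ep_size: int, ep_rank: int) -> range:
    """Global indices of the complete experts owned by `ep_rank` (contiguous block assumed)."""
    assert num_experts % ep_size == 0, "num_experts must be divisible by ep_size"
    experts_per_rank = num_experts // ep_size
    return range(ep_rank * experts_per_rank, (ep_rank + 1) * experts_per_rank)


# With 128 experts and ep_size=4, rank 2 would own experts 64..95.
assert list(local_expert_ids(128, 4, 2))[:2] == [64, 65]
```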
How it works
Instead of TP-sharding every expert's intermediate dimension, EP gives each rank `num_experts / ep_size` complete experts. The forward pass for each MoE layer then does:

1. Route each token to its top-k experts with `fused_topk`
2. Dispatch tokens to the rank that owns the target expert via `all_to_all_single`
3. Run `fused_experts_impl` locally on the received tokens
4. Combine the results back with a second `all_to_all_single`

No new kernels — steps 1 and 3 reuse `fused_topk` and `fused_experts_impl` directly.
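A minimal sketch of that four-step flow using `torch.distributed.all_to_all_single`; the plain softmax/top-k routing and the per-expert loop stand in for `fused_topk` and `fused_experts_impl`, and the function signature, tensor layouts, and contiguous expert-to-rank mapping are assumptions rather than the PR's actual code:

```python
import torch
import torch.distributed as dist


def ep_moe_forward(hidden, router_logits, local_w1, local_w2, top_k, ep_group):
    """One EP MoE layer: route -> dispatch -> local expert compute -> combine.

    hidden:        [T, H]                  tokens on this rank
    router_logits: [T, num_experts]
    local_w1/w2:   [experts_per_rank, H, I] / [experts_per_rank, I, H]  (this rank's experts)
    """
    ep_rank, ep_size = dist.get_rank(ep_group), dist.get_world_size(ep_group)
    num_tokens, hidden_dim = hidden.shape
    experts_per_rank = router_logits.shape[-1] // ep_size

    # 1) Routing (fused_topk in the PR; plain softmax + topk here).
    topk_w, topk_ids = router_logits.softmax(dim=-1).topk(top_k, dim=-1)       # [T, k]

    # 2) Dispatch: group (token, expert) pairs by owning rank, exchange counts, then tokens.
    flat_ids, flat_w = topk_ids.reshape(-1), topk_w.reshape(-1)
    token_idx = torch.arange(num_tokens, device=hidden.device).repeat_interleave(top_k)
    dest_rank = flat_ids // experts_per_rank
    order = torch.argsort(dest_rank, stable=True)     # contiguous chunk per destination rank
    send_tokens, send_ids = hidden[token_idx[order]], flat_ids[order]

    in_splits = torch.bincount(dest_rank, minlength=ep_size)
    out_splits = torch.empty_like(in_splits)
    dist.all_to_all_single(out_splits, in_splits, group=ep_group)
    in_s, out_s = in_splits.tolist(), out_splits.tolist()

    recv_tokens = hidden.new_empty((sum(out_s), hidden_dim))
    recv_ids = send_ids.new_empty((sum(out_s),))
    dist.all_to_all_single(recv_tokens, send_tokens, out_s, in_s, group=ep_group)
    dist.all_to_all_single(recv_ids, send_ids, out_s, in_s, group=ep_group)

    # 3) Local expert compute (fused_experts_impl in the PR; a per-expert loop here).
    local_eid = recv_ids - ep_rank * experts_per_rank
    expert_out = torch.empty_like(recv_tokens)
    for e in range(experts_per_rank):
        mask = local_eid == e
        if mask.any():
            expert_out[mask] = torch.relu(recv_tokens[mask] @ local_w1[e]) @ local_w2[e]

    # 4) Combine: reverse all-to-all, then scatter-add back per original token, weighted.
    back = hidden.new_empty(send_tokens.shape)
    dist.all_to_all_single(back, expert_out, in_s, out_s, group=ep_group)
    out = torch.zeros_like(hidden)
    out.index_add_(0, token_idx[order], (back * flat_w[order].unsqueeze(-1)).to(hidden.dtype))
    return out
```

The extra `all_to_all_single` on the split counts is only there so each rank can size its receive buffers before exchanging the tokens themselves; the overall communication pattern is the dispatch/compute/combine described above.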
What changed
1 new file, 9 modified, 1 doc update:
- `moe/ep.py` (new)
- `distributed/info.py`
- `distributed/__init__.py`
- `distributed/impl.py`
- `engine/config.py` (new `ep_size` field)
- `engine/engine.py`
- `layers/moe.py`
- `models/weight.py`
- `moe/__init__.py`
- `server/args.py` (new `--ep-size` flag)
- `docs/features.md`
Constraints

- `ep_size` must equal `tp_size` or 1 (they share the same NCCL world group)
- `num_experts % ep_size == 0`
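For illustration, a rough sketch of how these two checks could be wired to the new `--ep-size` flag and `ep_size` config field; the dataclass, the fields beyond `ep_size`/`tp_size`, and the parsing layout are assumptions, not the PR's actual `engine/config.py` / `server/args.py` code:

```python
import argparse
from dataclasses import dataclass


@dataclass
class ParallelConfig:
    # Hypothetical container; the PR only adds an `ep_size` field to engine/config.py.
    tp_size: int = 1
    ep_size: int = 1
    num_experts: int = 0  # taken from the model config at load time

    def validate(self) -> None:
        # EP reuses the TP NCCL world group, so it is either disabled (1) or equal to tp_size.
        if self.ep_size not in (1, self.tp_size):
            raise ValueError(f"ep_size must be 1 or tp_size={self.tp_size}, got {self.ep_size}")
        # Every rank must own the same number of complete experts.
        if self.ep_size > 1 and self.num_experts % self.ep_size != 0:
            raise ValueError(
                f"num_experts={self.num_experts} is not divisible by ep_size={self.ep_size}"
            )


parser = argparse.ArgumentParser()
parser.add_argument("--tp", type=int, default=1)
parser.add_argument("--ep-size", type=int, default=1)  # flag added by this PR
args = parser.parse_args()

# e.g. 128 routed experts (Qwen3-30B-A3B), so --tp 4 --ep-size 4 gives 32 experts per rank.
cfg = ParallelConfig(tp_size=args.tp, ep_size=args.ep_size, num_experts=128)
cfg.validate()
```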
Testing

Ran on Qwen3-30B-A3B with `--tp 4 --ep-size 4` on 4×H200.

Usage
```bash
python -m minisgl --model "Qwen/Qwen3-30B-A3B" --tp 4 --ep-size 4
```