
MoE autotune prints many failed-kernel warnings on SM120 #2077

@weireweire

Description


I just updated to a version that includes #1925 and tested DeepSeek R1 nvfp4 with PP8 on SM120 in vLLM, and I now see many log lines like:

(Worker_PP0 pid=38385) 2025-11-11 06:41:27,319 - WARNING - autotuner.py:485 - flashinfer.jit: [Autotuner]: Skipping tactic <flashinfer.fused_moe.core.get_cutlass_fused_moe_module.<locals>.MoERunner object at 0x7c70ccce8350> 15, due to failure while profiling: [TensorRT-LLM][ERROR] Assertion failed: Failed to initialize cutlass TMA WS grouped gemm. Error: Error Internal (/workspace/flashinfer/csrc/nv_internal/tensorrt_llm/cutlass_instantiations/120/gemm_grouped/120/cutlass_kernel_file_gemm_grouped_sm120_M256_BS_group0.generated.cu:60)
(Worker_PP0 pid=38385) 1       0x7c6dd0a58b16 tensorrt_llm::common::throwRuntimeError(char const*, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 97
(Worker_PP0 pid=38385) 2       0x7c6dd11ca0ad /home/weiliangl/.cache/flashinfer/0.5.2/120a/cached_ops/fused_moe_120/fused_moe_120.so(+0x9080ad) [0x7c6dd11ca0ad]
(Worker_PP0 pid=38385) 3       0x7c6dd0c453d8 void tensorrt_llm::kernels::cutlass_kernels_oss::dispatchMoeGemmSelectClusterShapeTmaWarpSpecialized<cutlass::arch::Sm120, __nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, tensorrt_llm::cutlass_extensions::EpilogueOpDefault, (tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput::EpilogueFusion)3, cute::tuple<cute::C<256>, cute::C<128>, cute::C<128> > >(tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput, int, tensorrt_llm::cutlass_extensions::CutlassGemmConfig, int, CUstream_st*, int*, unsigned long*) + 168
(Worker_PP0 pid=38385) 4       0x7c6dd0c4608f void tensorrt_llm::kernels::cutlass_kernels_oss::dispatchMoeGemmSelectTileShapeTmaWarpSpecialized<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, tensorrt_llm::cutlass_extensions::EpilogueOpDefault, (tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput::EpilogueFusion)3>(tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput, int, tensorrt_llm::cutlass_extensions::CutlassGemmConfig, int, CUstream_st*, int*, unsigned long*) + 2271
(Worker_PP0 pid=38385) 5       0x7c6dd0c3462b void tensorrt_llm::kernels::cutlass_kernels::MoeGemmRunner<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, __nv_bfloat16>::dispatchToArch<tensorrt_llm::cutlass_extensions::EpilogueOpDefault>(tensorrt_llm::kernels::cutlass_kernels::GroupedGemmInput<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, __nv_bfloat16>, tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput) + 331
(Worker_PP0 pid=38385) 6       0x7c6dd1102d2f tensorrt_llm::kernels::cutlass_kernels::CutlassMoeFCRunner<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, __nv_bfloat16, __nv_bfloat16, void>::gemm2(tensorrt_llm::kernels::cutlass_kernels::MoeGemmRunner<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, __nv_bfloat16>&, tensorrt_llm::kernels::fp8_blockscale_gemm::CutlassFp8BlockScaleGemmRunnerInterface*, __nv_fp4_e2m1 const*, void*, __nv_bfloat16*, long const*, tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput, __nv_fp4_e2m1 const*, __nv_bfloat16 const*, __nv_bfloat16 const*, float const*, unsigned char const*, tensorrt_llm::kernels::cutlass_kernels::QuantParams, float const*, float const*, int const*, int const*, int const*, long const*, long, long, long, long, long, int, long, float const**, bool, void*, CUstream_st*, tensorrt_llm::kernels::cutlass_kernels::MOEParallelismConfig, bool, tensorrt_llm::cutlass_extensions::CutlassGemmConfig, bool, int*, int*, bool) + 1183
(Worker_PP0 pid=38385) 7       0x7c6dd110343b tensorrt_llm::kernels::cutlass_kernels::CutlassMoeFCRunner<__nv_fp4_e2m1, __nv_fp4_e2m1, __nv_bfloat16, __nv_bfloat16, __nv_bfloat16, void>::gemm2(void const*, void*, void*, long const*, tensorrt_llm::kernels::cutlass_kernels::TmaWarpSpecializedGroupedGemmInput, void const*, void const*, void const*, float const*, unsigned char const*, tensorrt_llm::kernels::cutlass_kernels::QuantParams, float const*, float const*, int const*, int const*, int const*, long const*, long, long, long, long, long, int, long, float const**, bool, void*, bool, CUstream_st*, tensorrt_llm::kernels::cutlass_kernels::MOEParallelismConfig, bool, tensorrt_llm::cutlass_extensions::CutlassGemmConfig, bool, int*, int*, bool) + 411
(Worker_PP0 pid=38385) 8       0x7c6dd10754df tensorrt_llm::kernels::cutlass_kernels::GemmProfilerBackend::runProfiler(int, tensorrt_llm::cutlass_extensions::CutlassGemmConfig const&, char*, void const*, bool, CUstream_st* const&) + 2735
(Worker_PP0 pid=38385) 9       0x7c6dd1000cb5 /home/weiliangl/.cache/flashinfer/0.5.2/120a/cached_ops/fused_moe_120/fused_moe_120.so(+0x73ecb5) [0x7c6dd1000cb5]
(Worker_PP0 pid=38385) 10      0x7c6dd103ce96 /home/weiliangl/.cache/flashinfer/0.5.2/120a/cached_ops/fused_moe_120/fused_moe_120.so(+0x77ae96) [0x7c6dd103ce96]
(Worker_PP0 pid=38385) 11      0x7c6dd0ffcdf3 tvm::ffi::details::FunctionObjImpl<tvm::ffi::Function::FromTyped<FusedMoeRunner::GetFunction(tvm::ffi::String const&)::{lambda(tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::Optional<tvm::ffi::TensorView, void>, tvm::ffi::TensorView, tvm::ffi::Optional<tvm::ffi::TensorView, void>, long, long, long, long, long, long, long, bool, bool, long, long, bool, bool, long)#1}>(FusedMoeRunner::GetFunction(tvm::ffi::String const&)::{lambda(tvm::ffi::TensorView, tvm::ffi::TensorView, tvm::ffi::Optional<tvm::ffi::TensorView, void>, tvm::ffi::TensorView, tvm::ffi::Optional<tvm::ffi::TensorView, void>, long, long, long, long, long, long, long, bool, bool, long, long, bool, bool, long)#1})::{lambda(tvm::ffi::AnyView const*, int, tvm::ffi::Any*)#1}>::SafeCall(void*, TVMFFIAny const*, int, TVMFFIAny*) + 931
(Worker_PP0 pid=38385) 12      0x7c70d8850b94 /workspace/.venv/lib/python3.12/site-packages/tvm_ffi/core.abi3.so(+0x55b94) [0x7c70d8850b94]
(Worker_PP0 pid=38385) 13            0x549185 _PyObject_MakeTpCall + 117
(Worker_PP0 pid=38385) 14            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 15            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 16            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 17            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 18            0x54aa9a _PyObject_Call_Prepend + 394
(Worker_PP0 pid=38385) 19            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 20            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 21            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 22            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 23            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 24            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 25            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 26            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 27            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 28            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 29            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 30            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 31            0x54aa9a _PyObject_Call_Prepend + 394
(Worker_PP0 pid=38385) 32            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 33            0x54924e _PyObject_MakeTpCall + 318
(Worker_PP0 pid=38385) 34            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 35      0x7c7273482e27 /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0x974e27) [0x7c7273482e27]
(Worker_PP0 pid=38385) 36      0x7c72737c0ddb /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0xcb2ddb) [0x7c72737c0ddb]
(Worker_PP0 pid=38385) 37      0x7c7248892fb4 /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so(+0x5ce9fb4) [0x7c7248892fb4]
(Worker_PP0 pid=38385) 38      0x7c727353ff74 /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0xa31f74) [0x7c727353ff74]
(Worker_PP0 pid=38385) 39      0x7c7273540388 /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0xa32388) [0x7c7273540388]
(Worker_PP0 pid=38385) 40      0x7c7273433e73 /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0x925e73) [0x7c7273433e73]
(Worker_PP0 pid=38385) 41      0x7c7272ece72e /workspace/.venv/lib/python3.12/site-packages/torch/lib/libtorch_python.so(+0x3c072e) [0x7c7272ece72e]
(Worker_PP0 pid=38385) 42            0x58208f VLLM::Worker_PP0() [0x58208f]
(Worker_PP0 pid=38385) 43            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 44            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 45            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 46            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 47            0x549185 _PyObject_MakeTpCall + 117
(Worker_PP0 pid=38385) 48            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 49            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 50            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 51            0x549185 _PyObject_MakeTpCall + 117
(Worker_PP0 pid=38385) 52            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 53            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 54            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 55            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 56            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 57            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 58            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 59            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 60            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 61            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 62            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 63            0x549185 _PyObject_MakeTpCall + 117
(Worker_PP0 pid=38385) 64            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 65            0x54cccd VLLM::Worker_PP0() [0x54cccd]
(Worker_PP0 pid=38385) 66            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 67            0x54cccd VLLM::Worker_PP0() [0x54cccd]
(Worker_PP0 pid=38385) 68            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 69            0x54cccd VLLM::Worker_PP0() [0x54cccd]
(Worker_PP0 pid=38385) 70            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 71            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 72            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 73            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 74            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 75            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 76            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 77            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 78            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 79            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 80            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 81            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 82            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 83            0x54cccd VLLM::Worker_PP0() [0x54cccd]
(Worker_PP0 pid=38385) 84            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 85            0x54a9d2 _PyObject_Call_Prepend + 194
(Worker_PP0 pid=38385) 86            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 87            0x549185 _PyObject_MakeTpCall + 117
(Worker_PP0 pid=38385) 88            0x5d73c9 _PyEval_EvalFrameDefault + 2697
(Worker_PP0 pid=38385) 89            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 90            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 91            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 92            0x54cd94 VLLM::Worker_PP0() [0x54cd94]
(Worker_PP0 pid=38385) 93            0x54b3b5 PyObject_Call + 277
(Worker_PP0 pid=38385) 94            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 95            0x54aa9a _PyObject_Call_Prepend + 394
(Worker_PP0 pid=38385) 96            0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 97            0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 98            0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 99            0x54aa9a _PyObject_Call_Prepend + 394
(Worker_PP0 pid=38385) 100           0x5a3628 VLLM::Worker_PP0() [0x5a3628]
(Worker_PP0 pid=38385) 101           0x54b30c PyObject_Call + 108
(Worker_PP0 pid=38385) 102           0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 103           0x54cd32 VLLM::Worker_PP0() [0x54cd32]
(Worker_PP0 pid=38385) 104           0x5db55b _PyEval_EvalFrameDefault + 19483
(Worker_PP0 pid=38385) 105           0x5d58eb PyEval_EvalCode + 347
(Worker_PP0 pid=38385) 106           0x608a23 PyRun_StringFlags + 211
(Worker_PP0 pid=38385) 107           0x6b3e9e PyRun_SimpleStringFlags + 62
(Worker_PP0 pid=38385) 108           0x6bcb61 Py_RunMain + 1153
(Worker_PP0 pid=38385) 109           0x6bc57d Py_BytesMain + 45
(Worker_PP0 pid=38385) 110     0x7c7286ffb1ca /usr/lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca) [0x7c7286ffb1ca]
(Worker_PP0 pid=38385) 111     0x7c7286ffb28b __libc_start_main + 139
(Worker_PP0 pid=38385) 112           0x657ce5 _start + 37

This doesn't prevent the server from running, but I didn't see it before this update, and each skipped tactic prints a long backtrace, so the log gets very noisy.
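As a workaround for the noise (this does not address the underlying CUTLASS init failure), the warnings can be suppressed with a standard `logging` filter. This is a minimal sketch; it assumes the autotuner logs through a logger named `flashinfer.jit`, as the `flashinfer.jit: [Autotuner]:` prefix in the line above suggests, and that records are emitted directly on that logger rather than propagated from a child.

```python
import logging

class SkipTacticFilter(logging.Filter):
    """Drop the autotuner's 'Skipping tactic' warnings; keep all other records."""
    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False suppresses the record.
        return "Skipping tactic" not in record.getMessage()

# "flashinfer.jit" is the logger name inferred from the log output above;
# adjust it if the installed flashinfer version logs under a different name.
logging.getLogger("flashinfer.jit").addFilter(SkipTacticFilter())
```

Note that a filter attached to a logger only applies to records created on that logger itself, not to records propagated up from child loggers; attaching the same filter to the relevant handler instead covers both cases.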

The input shape of the MoE layer is [16000, 7168].

My vllm command is:

vllm serve nvidia/DeepSeek-R1-0528-FP4-v2 --trust-remote-code --host 0.0.0.0 --port 8000 --pipeline-parallel-size 8 --tensor-parallel-size 1 --max-num-seqs 32 --max-cudagraph-capture-size 32 --max-model-len 4010 --max-num-batched-tokens 16000 --enable-chunked-prefill --kv-cache-dtype auto --gpu-memory-utilization 0.85 --no-enable-prefix-caching
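To gauge how widespread the failures are, the captured server output can be grepped for the warning and for the generated kernel files that failed to initialize. A small sketch; `vllm.log` is a hypothetical file holding the server's stderr/stdout, seeded here with sample lines in the same format as the log above:

```shell
# Stand-in for the real server log, using the same warning text as above.
printf '%s\n' \
  'WARNING autotuner.py:485 - flashinfer.jit: [Autotuner]: Skipping tactic <MoERunner> 15, due to failure while profiling' \
  'INFO some unrelated line' \
  'WARNING autotuner.py:485 - flashinfer.jit: [Autotuner]: Skipping tactic <MoERunner> 7, due to failure while profiling' \
  > vllm.log

# How many tactics did the autotuner skip?
grep -c "Skipping tactic" vllm.log

# Which distinct generated CUTLASS kernel files failed? (none in this sample)
grep -o "cutlass_kernel_file_[A-Za-z0-9_]*\.generated\.cu" vllm.log | sort -u
```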

Metadata

Labels: bug (Something isn't working)