
[Bug] failed to run tuning_fused_moe_triton.py #4991

Closed
@inkhare

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

python /data00/models/tuning_fused_moe_triton.py --model /data00/models/DeepSeek-R1 -tp 8 --tune
WARNING 04-02 06:59:49 cuda.py:23] You are using a deprecated pynvml package. Please install nvidia-ml-py instead, and make sure to uninstall pynvml. When both of them are installed, pynvml will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
Warning: Your installation of OpenCV appears to be broken: module 'cv2.dnn' has no attribute 'DictValue'. Please follow the instructions at opencv/opencv-python#884 to correct your environment. The import of cv2 has been skipped.
Traceback (most recent call last):
  File "/data00/models/tuning_fused_moe_triton.py", line 14, in <module>
    from sglang.srt.layers.moe.fused_moe_triton.fused_moe import (
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/layers/moe/fused_moe_triton/__init__.py", line 4, in <module>
    import sglang.srt.layers.moe.fused_moe_triton.fused_moe  # noqa
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/layers/moe/fused_moe_triton/fused_moe.py", line 16, in <module>
    from sglang.srt.layers.quantization.fp8_kernel import per_token_group_quant_fp8
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/layers/quantization/__init__.py", line 54, in <module>
    from sglang.srt.layers.moe.fused_moe_triton.layer import FusedMoE
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 24, in <module>
    from sglang.srt.layers.moe.fused_moe_triton.fused_moe import fused_experts
ImportError: cannot import name 'fused_experts' from partially initialized module 'sglang.srt.layers.moe.fused_moe_triton.fused_moe' (most likely due to a circular import) (/usr/local/lib/python3.10/dist-packages/sglang/srt/layers/moe/fused_moe_triton/fused_moe.py)
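
For context, the traceback shows a textbook circular import: fused_moe.py imports from the sglang.srt.layers.quantization package at module level, that package's __init__.py imports FusedMoE from layer.py, and layer.py imports fused_experts back from the still-initializing fused_moe.py. The sketch below reproduces the same failure mode with two hypothetical stand-in modules (pkg_a / pkg_b are made up for illustration; they are not sglang files):

import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# pkg_a plays the role of fused_moe.py: a module-level import pulls in pkg_b
# before pkg_a has finished defining its own names.
with open(os.path.join(tmp, "pkg_a.py"), "w") as f:
    f.write(textwrap.dedent("""
        from pkg_b import helper  # starts importing pkg_b while pkg_a is half-built

        def fused_experts():
            return helper()
    """))

# pkg_b plays the role of the quantization package: it imports back into pkg_a.
with open(os.path.join(tmp, "pkg_b.py"), "w") as f:
    f.write(textwrap.dedent("""
        from pkg_a import fused_experts  # pkg_a is only partially initialized here

        def helper():
            return "ok"
    """))

sys.path.insert(0, tmp)
try:
    import pkg_a
except ImportError as e:
    # Prints: cannot import name 'fused_experts' from partially initialized
    # module 'pkg_a' (most likely due to a circular import)
    print(e)

A common way to break such a cycle is to defer one side of it into a function-local import (importing the symbol inside the function that actually needs it rather than at module top level), but whether that is the right fix for sglang 0.4.4.post3 is for the maintainers to decide.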

Reproduction

device: NVIDIA H20-3e, tensor parallel size 8 (-tp 8)
model: DeepSeek-R1
command: python /data00/models/tuning_fused_moe_triton.py --model /data00/models/DeepSeek-R1 -tp 8 --tune (same as above)
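
The full tuning script should not be needed to hit the error; based on the traceback, importing the module the script imports at its line 14 should be enough to trigger the cycle (this is an assumption, not a confirmed minimal repro):

python -c "import sglang.srt.layers.moe.fused_moe_triton.fused_moe"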

Environment

Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20-3e
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.144.03
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post3
sgl_kernel: 0.0.5.post4
flashinfer: 0.1.6+cu124torch2.4
triton: 3.1.0
transformers: 4.50.0
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.9.3
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.22.3
orjson: 3.10.15
outlines: 0.0.46
packaging: 23.2
psutil: 5.9.4
pydantic: 2.10.6
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.6.4.post1
xgrammar: 0.1.17
openai: 1.60.2
tiktoken: 0.7.0
anthropic: 0.45.2
litellm: 1.59.10
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE SYS SYS SYS 0-55,112-167 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE NODE SYS SYS SYS 0-55,112-167 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE PIX SYS SYS SYS 0-55,112-167 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE SYS SYS SYS 0-55,112-167 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS PIX NODE NODE 56-111,168-223 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS NODE NODE NODE 56-111,168-223 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS NODE NODE PIX 56-111,168-223 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS NODE NODE NODE 56-111,168-223 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE SYS SYS SYS
NIC1 NODE NODE PIX NODE SYS SYS SYS SYS NODE X SYS SYS SYS
NIC2 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS X NODE NODE
NIC3 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS NODE X NODE
NIC4 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS NODE NODE X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4

ulimit soft: 1048576
