
Support Eagle2 for Triton backend #3466

Merged (10 commits) on Feb 10, 2025

Conversation

@ispobock (Collaborator) commented on Feb 10, 2025

Motivation

Support Eagle2 for the Triton backend, achieving a 2.6x decoding speedup at batch size 1 with CUDA graph disabled (based on #3317, #3309, #3292).
CUDA graph support will be added in a follow-up PR.

Baseline (Triton backend, no speculative decoding):

python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --disable-radix --disable-cuda-graph --attention-backend triton

speed: 71.34 token/s

With EAGLE speculative decoding (--speculative-num-steps is the number of draft steps, --speculative-eagle-topk the number of branches kept per step, and --speculative-num-draft-tokens the total number of draft tokens verified per forward pass):

python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algo EAGLE --speculative-draft lmzheng/sglang-EAGLE-llama2-chat-7B --speculative-num-steps 5 --speculative-eagle-topk 8 --speculative-num-draft-tokens 64 --mem-fraction 0.8 --disable-radix --disable-cuda-graph --attention-backend triton

speed: 185.54 token/s

The test script is referenced from #2150.
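For reference, here is a minimal, hypothetical throughput check (not the script from #2150). It assumes a server launched with one of the commands above is listening on sglang's default port 30000 and exposes the OpenAI-compatible /v1/completions endpoint, and it reports generated tokens per second from the response's usage field.

```python
# Minimal decode-throughput sketch (assumptions: default port 30000,
# OpenAI-compatible /v1/completions endpoint; not the #2150 script).
import time

import requests

URL = "http://127.0.0.1:30000/v1/completions"  # assumed default sglang port
payload = {
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "prompt": "Explain speculative decoding in one paragraph.",
    "max_tokens": 512,
    "temperature": 0,
}

# Time a single non-streaming completion request.
start = time.perf_counter()
resp = requests.post(URL, json=payload, timeout=600)
elapsed = time.perf_counter() - start
resp.raise_for_status()

# The OpenAI-compatible response reports generated token counts in "usage".
completion_tokens = resp.json()["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.2f}s "
      f"-> {completion_tokens / elapsed:.2f} token/s")
```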

Thanks @aspctu for helping debug!

@zhyncs merged commit 2d61132 into sgl-project:main on Feb 10, 2025
18 of 19 checks passed
@zhyncs mentioned this pull request on Feb 10, 2025
@feifeibear mentioned this pull request on Feb 11, 2025