update eagle-3 docs #4796


Merged
merged 10 commits into from
Apr 3, 2025
64 changes: 40 additions & 24 deletions docs/backend/speculative_decoding.ipynb
@@ -6,18 +6,19 @@
"source": [
"# Speculative Decoding\n",
"\n",
"SGLang now provides an EAGLE2-based speculative decoding option. Our implementation aims to maximize speed and efficiency and is considered to be among the fastest in open-source LLM engines.\n",
"\n",
"SGLang now provides an EAGLE-based (EAGLE-2/EAGLE-3) speculative decoding option. Our implementation aims to maximize speed and efficiency and is considered to be among the fastest in open-source LLM engines.\n",
"**Note:** Currently, Speculative Decoding in SGLang is compatible with radix cache and chunked prefill.\n",
"\n",
"### Performance Highlights\n",
"\n",
"- Official EAGLE code ([SafeAILab/EAGLE](https://github.com/SafeAILab/EAGLE)): ~200 tokens/s\n",
"- Standard SGLang Decoding: ~156 tokens/s\n",
"- EAGLE Decoding in SGLang: ~297 tokens/s\n",
"- EAGLE Decoding in SGLang (w/ `torch.compile`): ~316 tokens/s\n",
"The table below shows the large throughput improvements that can be achieved via EAGLE-3 decoding for Llama 3.1 8B Instruct, tested on MT-Bench.\n",
"For further details, please see the [EAGLE-3 paper](https://arxiv.org/pdf/2503.01840).\n",
"\n",
"All benchmarks below were run on a single H100."
"| Method | Throughput (tokens/s) |\n",
"|--------|-----------------------|\n",
"| SGLang (w/o speculative, 1x H100) | 158.34 |\n",
"| SGLang + EAGLE-2 (1x H100) | 244.10 |\n",
"| SGLang + EAGLE-3 (1x H100) | 373.25 |\n"
]
},
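For quick reference, the relative speedups implied by the throughput table above (computed directly from the reported numbers) work out to roughly 1.5x for EAGLE-2 and 2.4x for EAGLE-3 over plain SGLang decoding:

```python
# Speedups implied by the MT-Bench throughput numbers above (1x H100).
baseline = 158.34  # SGLang without speculative decoding, tokens/s
eagle2 = 244.10    # SGLang + EAGLE-2, tokens/s
eagle3 = 373.25    # SGLang + EAGLE-3, tokens/s

print(f"EAGLE-2 speedup: {eagle2 / baseline:.2f}x")  # ~1.54x
print(f"EAGLE-3 speedup: {eagle3 / baseline:.2f}x")  # ~2.36x
```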
{
@@ -32,7 +33,18 @@
"\n",
"* `speculative_eagle_topk`: The branching factor per step. Larger values improve candidate diversity and the acceptance rate, but also increase memory and compute consumption. Default is 4.\n",
"\n",
"* `speculative_num_draft_tokens`: Maximum parallel verification capacity. Allows deeper tree evaluation but will lead to higher GPU memory usage. Default is 8."
"* `speculative_num_draft_tokens`: Maximum parallel verification capacity. Allows deeper tree evaluation but will lead to higher GPU memory usage. Default is 8.\n",
"\n",
"These parameters are the same for EAGLE-2 and EAGLE-3."
]
},
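As a rough intuition for how these parameters interact (a deliberate simplification, not SGLang's exact algorithm): without EAGLE-2's dynamic pruning, a draft tree with branching factor `speculative_eagle_topk` would contain `topk ** num_steps` candidate paths after `speculative_num_steps` steps, which is why pruning and the `speculative_num_draft_tokens` cap on verified nodes are needed:

```python
# Illustrative only: size of a hypothetical *unpruned* draft tree, to show
# why EAGLE-2 prunes branches and caps verification at num_draft_tokens.
def unpruned_paths(num_steps: int, topk: int) -> int:
    return topk ** num_steps

print(unpruned_paths(5, 4))  # 1024 candidate paths at the defaults, far above the cap of 8
```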
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### EAGLE-2 decoding\n",
"\n",
"You can enable EAGLE-2 decoding by setting `--speculative-algorithm EAGLE` and providing an appropriate draft model via `--speculative-draft-model-path`."
]
},
{
@@ -50,6 +62,15 @@
"\n",
"from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
"\n",
"import openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"server_process, port = launch_server_cmd(\n",
" \"\"\"\n",
"python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algorithm EAGLE \\\n",
@@ -67,12 +88,10 @@
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
"\n",
"response = client.chat.completions.create(\n",
" model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
" model=\"meta-llama/Llama-2-7b-chat-hf\",\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
" ],\n",
@@ -96,9 +115,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### EAGLE Decoding with `torch.compile`\n",
"### EAGLE-2 Decoding with `torch.compile`\n",
"\n",
"You can also enable `torch.compile` for further optimizations and optionally set `--cuda-graph-max-bs`:\n"
"You can also enable `torch.compile` for further optimizations and optionally set `--torch-compile-max-bs`:\n"
]
},
{
Expand All @@ -112,7 +131,7 @@
"python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algorithm EAGLE \\\n",
" --speculative-draft-model-path lmsys/sglang-EAGLE-llama2-chat-7B --speculative-num-steps 5 \\\n",
" --speculative-eagle-topk 8 --speculative-num-draft-tokens 64 --mem-fraction 0.6 \\\n",
" --enable-torch-compile --cuda-graph-max-bs 2\n",
" --enable-torch-compile --torch-compile-max-bs 2\n",
"\"\"\"\n",
")\n",
"\n",
@@ -125,12 +144,10 @@
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
"\n",
"response = client.chat.completions.create(\n",
" model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
" model=\"meta-llama/Llama-2-7b-chat-hf\",\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
" ],\n",
@@ -154,7 +171,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### EAGLE Decoding via Frequency-Ranked Speculative Sampling\n",
"### EAGLE-2 Decoding via Frequency-Ranked Speculative Sampling\n",
"\n",
"By employing a truncated high-frequency token vocabulary in the draft model, EAGLE speculative decoding reduces `lm_head` computational overhead while accelerating the pipeline without quality degradation. For more details, check out [the paper](https://arxiv.org/pdf/2502.14856).\n",
"\n",
@@ -187,8 +204,6 @@
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
"\n",
"response = client.chat.completions.create(\n",
Expand Down Expand Up @@ -218,7 +233,7 @@
"source": [
"### EAGLE-3 Decoding\n",
"\n",
"You can enable EAGLE-3 decoding by setting `--speculative_draft_model_path: EAGLE3`:"
"You can enable EAGLE-3 decoding by setting `--speculative-algorithm EAGLE3` and choosing an appropriate draft model."
]
},
{
Expand All @@ -245,8 +260,6 @@
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
"\n",
"response = client.chat.completions.create(\n",
@@ -283,7 +296,10 @@
"- EAGLE-2 additionally uses the draft model to evaluate how probable certain branches in the draft tree are, dynamically stopping the expansion of unlikely branches. After the expansion phase, reranking is employed to select only the top `speculative_num_draft_tokens` final nodes as draft tokens.\n",
"- EAGLE-3 removes the feature prediction objective, incorporates low and mid-layer features, and is trained in an on-policy manner.\n",
"\n",
"This enhances drafting accuracy by operating on the features instead of tokens for more regular inputs and passing the tokens from the next timestep additionaly to minimize randomness effects from sampling. Furthermore the dynamic adjustment of the draft tree and selection of reranked final nodes increases acceptance rate of draft tokens further. For more details see [the paper](https://arxiv.org/abs/2406.16858)."
"This enhances drafting accuracy by operating on features instead of tokens, which are more regular inputs, and by additionally passing the tokens from the next timestep to minimize randomness effects from sampling. Furthermore, the dynamic adjustment of the draft tree and the selection of reranked final nodes further increase the acceptance rate of draft tokens. For more details, see the [EAGLE-2](https://arxiv.org/abs/2406.16858) and [EAGLE-3](https://arxiv.org/abs/2503.01840) papers.\n",
"\n",
"\n",
"For guidance on how to train your own EAGLE model, please see the [EAGLE repo](https://github.com/SafeAILab/EAGLE/tree/main?tab=readme-ov-file#train)."
]
}
],
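The draft-and-verify loop that underlies all EAGLE variants can be sketched as follows. This is a minimal greedy-decoding sketch with stand-in `draft_next`/`target_next` functions, not SGLang's implementation (which verifies whole draft trees in batched GPU passes rather than single tokens):

```python
# Minimal sketch of one speculative draft-and-verify step (greedy case).
# `draft_next` and `target_next` are stand-ins that map a token context
# to the next token, for the draft and target models respectively.

def speculative_step(prefix, draft_next, target_next, num_draft_tokens):
    # 1) Draft phase: the cheap draft model proposes a chain of candidates.
    draft = []
    ctx = list(prefix)
    for _ in range(num_draft_tokens):
        tok = draft_next(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2) Verify phase: accept the longest prefix where the target model's
    #    own greedy choice matches the draft; on the first mismatch, emit
    #    the target's corrected token instead and stop.
    accepted = []
    ctx = list(prefix)
    for tok in draft:
        target_tok = target_next(ctx)
        if target_tok != tok:
            accepted.append(target_tok)  # correction token
            return accepted
        accepted.append(tok)
        ctx.append(tok)
    accepted.append(target_next(ctx))  # bonus token after a full accept
    return accepted
```

With a perfect draft model, each step yields `num_draft_tokens + 1` tokens from a single target-model verification pass, which is where the throughput gains above come from; the acceptance rate of draft tokens determines how close practice gets to that bound.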