
Commit 37213bb

simveit and yifanzhang-pro authored and committed
update eagle-3 docs (sgl-project#4796)
Co-authored-by: Yifan Zhang <[email protected]>
1 parent 5d27532 commit 37213bb

File tree

1 file changed: +40 −24 lines


docs/backend/speculative_decoding.ipynb

Lines changed: 40 additions & 24 deletions
@@ -6,18 +6,19 @@
 "source": [
 "# Speculative Decoding\n",
 "\n",
-"SGLang now provides an EAGLE2-based speculative decoding option. Our implementation aims to maximize speed and efficiency and is considered to be among the fastest in open-source LLM engines.\n",
-"\n",
+"SGLang now provides an EAGLE-based (EAGLE-2/EAGLE-3) speculative decoding option. Our implementation aims to maximize speed and efficiency and is considered to be among the fastest in open-source LLM engines.\n",
 "**Note:** Currently, Speculative Decoding in SGLang is compatible with radix cache and chunked prefill.\n",
 "\n",
 "### Performance Highlights\n",
 "\n",
-"- Official EAGLE code ([SafeAILab/EAGLE](https://github.com/SafeAILab/EAGLE)): ~200 tokens/s\n",
-"- Standard SGLang Decoding: ~156 tokens/s\n",
-"- EAGLE Decoding in SGLang: ~297 tokens/s\n",
-"- EAGLE Decoding in SGLang (w/ `torch.compile`): ~316 tokens/s\n",
+"Please see below for the large throughput improvements for Llama 3.1 8B Instruct, tested on MT-Bench, that can be achieved via EAGLE-3 decoding.\n",
+"For further details, please see the [EAGLE-3 paper](https://arxiv.org/pdf/2503.01840).\n",
 "\n",
-"All benchmarks below were run on a single H100."
+"| Method | Throughput (tokens/s) |\n",
+"|--------|-----------------------|\n",
+"| SGLang (w/o speculative, 1x H100) | 158.34 |\n",
+"| SGLang + EAGLE-2 (1x H100) | 244.10 |\n",
+"| SGLang + EAGLE-3 (1x H100) | 373.25 |\n"
 ]
 },
 {
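The throughput table above implies roughly a 1.5x speedup for EAGLE-2 and a 2.4x speedup for EAGLE-3 over plain decoding. A quick sanity check, using only the numbers from the table:

```python
# Throughput numbers copied from the table above
# (tokens/s, 1x H100, Llama 3.1 8B Instruct on MT-Bench).
baseline = 158.34  # SGLang without speculative decoding
eagle2 = 244.10    # SGLang + EAGLE-2
eagle3 = 373.25    # SGLang + EAGLE-3

print(f"EAGLE-2 speedup: {eagle2 / baseline:.2f}x")  # 1.54x
print(f"EAGLE-3 speedup: {eagle3 / baseline:.2f}x")  # 2.36x
```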
@@ -32,7 +33,18 @@
 "\n",
 "* `speculative_eagle_topk`: Branching factor per step. Improves candidate diversity and acceptance rate, but also leads to higher memory/compute consumption. Default is 4.\n",
 "\n",
-"* `speculative_num_draft_tokens`: Maximum parallel verification capacity. Allows deeper tree evaluation but will lead to higher GPU memory usage. Default is 8."
+"* `speculative_num_draft_tokens`: Maximum parallel verification capacity. Allows deeper tree evaluation but will lead to higher GPU memory usage. Default is 8.\n",
+"\n",
+"These parameters are the same for EAGLE-2 and EAGLE-3."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### EAGLE-2 decoding\n",
+"\n",
+"You can enable EAGLE-2 decoding by setting `--speculative_algorithm EAGLE` and choosing an appropriate model."
 ]
 },
 {
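As a rough illustration of how the three parameters interact (an upper-bound sketch, not SGLang's actual tree construction — EAGLE-2 prunes unlikely branches dynamically, so real trees are smaller): `speculative_num_steps` sets the draft depth, `speculative_eagle_topk` the per-step branching, and `speculative_num_draft_tokens` caps how many nodes are actually verified in parallel.

```python
def max_tree_nodes(num_steps: int, topk: int) -> int:
    # A full tree of depth `num_steps` with branching factor `topk`,
    # excluding the root (the last verified token). Upper bound only.
    return sum(topk ** d for d in range(1, num_steps + 1))

def verified_tokens(num_steps: int, topk: int, num_draft_tokens: int) -> int:
    # After reranking, at most `num_draft_tokens` nodes are kept
    # and sent to the target model for parallel verification.
    return min(max_tree_nodes(num_steps, topk), num_draft_tokens)

# Defaults mentioned above: topk=4, num_draft_tokens=8; assume num_steps=5.
print(max_tree_nodes(5, 4))      # 1364 nodes in the unpruned tree
print(verified_tokens(5, 4, 8))  # 8 tokens actually verified per round
```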
@@ -50,6 +62,15 @@
 "\n",
 "from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
 "\n",
+"import openai"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
 "server_process, port = launch_server_cmd(\n",
 "    \"\"\"\n",
 "python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algorithm EAGLE \\\n",
@@ -67,12 +88,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import openai\n",
-"\n",
 "client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
 "\n",
 "response = client.chat.completions.create(\n",
-"    model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
+"    model=\"meta-llama/Llama-2-7b-chat-hf\",\n",
 "    messages=[\n",
 "        {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
 "    ],\n",
@@ -96,9 +115,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### EAGLE Decoding with `torch.compile`\n",
+"### EAGLE-2 Decoding with `torch.compile`\n",
 "\n",
-"You can also enable `torch.compile` for further optimizations and optionally set `--cuda-graph-max-bs`:\n"
+"You can also enable `torch.compile` for further optimizations and optionally set `--torch-compile-max-bs`:\n"
 ]
 },
 {
@@ -112,7 +131,7 @@
 "python3 -m sglang.launch_server --model meta-llama/Llama-2-7b-chat-hf --speculative-algorithm EAGLE \\\n",
 "    --speculative-draft-model-path lmsys/sglang-EAGLE-llama2-chat-7B --speculative-num-steps 5 \\\n",
 "    --speculative-eagle-topk 8 --speculative-num-draft-tokens 64 --mem-fraction 0.6 \\\n",
-"    --enable-torch-compile --cuda-graph-max-bs 2\n",
+"    --enable-torch-compile --torch-compile-max-bs 2\n",
 "\"\"\"\n",
 ")\n",
 "\n",
@@ -125,12 +144,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import openai\n",
-"\n",
 "client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
 "\n",
 "response = client.chat.completions.create(\n",
-"    model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
+"    model=\"meta-llama/Llama-2-7b-chat-hf\",\n",
 "    messages=[\n",
 "        {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
 "    ],\n",
@@ -154,7 +171,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### EAGLE Decoding via Frequency-Ranked Speculative Sampling\n",
+"### EAGLE-2 Decoding via Frequency-Ranked Speculative Sampling\n",
 "\n",
 "By employing a truncated high-frequency token vocabulary in the draft model, EAGLE speculative decoding reduces `lm_head` computational overhead while accelerating the pipeline without quality degradation. For more details, check out [the paper](https://arxiv.org/pdf/arXiv:2502.14856).\n",
 "\n",
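The idea of a truncated high-frequency vocabulary can be sketched as follows. This is a toy illustration, not SGLang's implementation; the corpus, `keep` size, and function name are made up:

```python
from collections import Counter

def build_hot_vocab(token_stream, keep):
    """Keep only the `keep` most frequent token ids for the draft lm_head."""
    counts = Counter(token_stream)
    hot = [tok for tok, _ in counts.most_common(keep)]
    # Map full-vocab ids -> truncated-vocab rows; the draft model's lm_head
    # then only needs `keep` output rows instead of the full vocabulary.
    return {tok: row for row, tok in enumerate(hot)}

corpus = [3, 7, 3, 3, 9, 7, 1, 3, 7, 2]  # toy token-id stream
print(build_hot_vocab(corpus, keep=2))   # {3: 0, 7: 1}
```

Draft logits are computed only over the truncated rows; accepted tokens are mapped back to full-vocabulary ids before verification by the target model.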
@@ -187,8 +204,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import openai\n",
-"\n",
 "client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
 "\n",
 "response = client.chat.completions.create(\n",
@@ -218,7 +233,7 @@
 "source": [
 "### EAGLE-3 Decoding\n",
 "\n",
-"You can enable EAGLE-3 decoding by setting `--speculative_draft_model_path: EAGLE3`:"
+"You can enable EAGLE-3 decoding by setting `--speculative_algorithm EAGLE3` and choosing an appropriate model."
 ]
 },
 {
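Why the acceptance rate drives the throughput gains above: for plain chain speculative decoding (a simplification of EAGLE's tree draft), the expected number of tokens produced per target forward pass is a geometric series in the per-token acceptance probability. A back-of-the-envelope sketch using the standard speculative-decoding analysis, not SGLang code; the example values of `a` and `k` are made up:

```python
def expected_tokens_per_step(a: float, k: int) -> float:
    # With k chained draft tokens and per-token acceptance probability a,
    # E[tokens per target pass] = (1 - a**(k+1)) / (1 - a): a geometric
    # series that includes the one "bonus" token the target always emits.
    assert 0.0 <= a < 1.0
    return (1 - a ** (k + 1)) / (1 - a)

print(round(expected_tokens_per_step(0.7, 5), 2))  # 2.94
```

A higher acceptance rate (what EAGLE-3's training improvements target) pushes this expectation up, which is where the throughput gains come from.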
@@ -245,8 +260,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import openai\n",
-"\n",
 "client = openai.Client(base_url=f\"http://127.0.0.1:{port}/v1\", api_key=\"None\")\n",
 "\n",
 "response = client.chat.completions.create(\n",
@@ -283,7 +296,10 @@
 "- EAGLE-2 additionally uses the draft model to evaluate how probable certain branches in the draft tree are, dynamically stopping the expansion of unlikely branches. After the expansion phase, reranking is employed to select only the top `speculative_num_draft_tokens` final nodes as draft tokens.\n",
 "- EAGLE-3 removes the feature prediction objective, incorporates low- and mid-layer features, and is trained in an on-policy manner.\n",
 "\n",
-"This enhances drafting accuracy by operating on the features instead of tokens for more regular inputs and passing the tokens from the next timestep additionaly to minimize randomness effects from sampling. Furthermore the dynamic adjustment of the draft tree and selection of reranked final nodes increases acceptance rate of draft tokens further. For more details see [the paper](https://arxiv.org/abs/2406.16858)."
+"This enhances drafting accuracy by operating on features instead of tokens for more regular inputs, and by additionally passing the tokens from the next timestep to minimize randomness effects from sampling. Furthermore, the dynamic adjustment of the draft tree and the selection of reranked final nodes further increase the acceptance rate of draft tokens. For more details, see the [EAGLE-2](https://arxiv.org/abs/2406.16858) and [EAGLE-3](https://arxiv.org/abs/2503.01840) papers.\n",
+"\n",
+"For guidance on how to train your own EAGLE model, please see the [EAGLE repo](https://github.com/SafeAILab/EAGLE/tree/main?tab=readme-ov-file#train)."
 ]
 }
 ],
