Commit 2e5d913

reidliu41 authored and dbyoung18 committed

[doc] add open-webui example (vllm-project#16747)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

1 parent 34bf3bc commit 2e5d913

File tree

3 files changed: +30 −0 lines changed (including a 67.7 KB image asset)
docs/source/deployment/frameworks/index.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -9,6 +9,7 @@ dstack
 helm
 lws
 modal
+open-webui
 skypilot
 triton
 :::
```
Lines changed: 29 additions & 0 deletions (new file)
(deployment-open-webui)=

# Open WebUI

1. Install [Docker](https://docs.docker.com/engine/install/).

2. Start the vLLM server with a supported chat-completion model, e.g.:

```console
vllm serve qwen/Qwen1.5-0.5B-Chat
```
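Once the server is running, it exposes an OpenAI-compatible API, which is what Open WebUI talks to. As a minimal sketch, this is the shape of the chat-completions request body such a client sends (the host and port here are assumptions; `vllm serve` defaults to port 8000, and the field names follow the OpenAI chat-completions schema):

```python
import json

# Assumed server address; `vllm serve` listens on port 8000 by default.
base_url = "http://localhost:8000/v1"

# Body of a POST to {base_url}/chat/completions, as an OpenAI-compatible
# client such as Open WebUI would send it.
payload = {
    "model": "qwen/Qwen1.5-0.5B-Chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}
print(f"POST {base_url}/chat/completions")
print(json.dumps(payload, indent=2))
```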
3. Start the [Open WebUI](https://github.com/open-webui/open-webui) Docker container (replace the vLLM serve host and port below):

```console
docker run -d -p 3000:8080 \
  --name open-webui \
  -v open-webui:/app/backend/data \
  -e OPENAI_API_BASE_URL=http://<vllm serve host>:<vllm serve port>/v1 \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
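The `OPENAI_API_BASE_URL` passed above is simply the vLLM server's address with the `/v1` API prefix appended. A minimal sketch of how it is composed (the host and port values are placeholders, assuming vLLM's default port 8000):

```python
# Placeholder values; substitute your actual vLLM serve host and port.
# `vllm serve` listens on port 8000 by default.
vllm_host = "localhost"
vllm_port = 8000

# Open WebUI expects the root of the OpenAI-compatible API, i.e. the
# server address plus the /v1 prefix.
openai_api_base_url = f"http://{vllm_host}:{vllm_port}/v1"
print(openai_api_base_url)
```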
4. Open it in the browser: <http://open-webui-host:3000/>

At the top of the web page, you can see the model `qwen/Qwen1.5-0.5B-Chat`.

:::{image} /assets/deployment/open_webui.png
:::
