Commit feb9eb8

docs: Remove datasets.rst and fix llama-stack build commands (#2061)
# Issue

Closes #2073

# What does this PR do?

- Removes `datasets.rst` from the list of document urls, as it no longer exists in torchtune. Referenced PR: pytorch/torchtune#1781
- Adds a step to run `uv sync`. Previously, I would get the following error:

```
➜ llama-stack git:(remove-deprecated-rst) uv venv --python 3.10
source .venv/bin/activate
Using CPython 3.10.13 interpreter at: /usr/bin/python3.10
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
(llama-stack) ➜ llama-stack git:(remove-deprecated-rst) INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type venv --run
zsh: llama: command not found...
```

## Test Plan

To test: Run through the `rag_agent` example in the `detailed_tutorial.md` file.
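The fix above can be sketched as a shell session (a sketch, assuming uv is installed and you are inside a llama-stack checkout; the key difference is that `uv venv` only creates an empty environment, while `uv sync` also installs the project's dependencies, which is what puts the `llama` CLI on the path):

```shell
# `uv sync` creates .venv AND installs the project's dependencies,
# including the `llama` console script that `uv venv` alone does not provide.
uv sync --python 3.10
source .venv/bin/activate

# With dependencies installed, the tutorial's build command can run:
INFERENCE_MODEL=llama3.2:3b llama stack build --template ollama --image-type venv --run
```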
1 parent c219a74 commit feb9eb8

File tree

2 files changed (+1, −5 lines changed)


docs/notebooks/Llama_Stack_RAG_Lifecycle.ipynb

Lines changed: 0 additions & 2 deletions

```diff
@@ -840,7 +840,6 @@
 " \"memory_optimizations.rst\",\n",
 " \"chat.rst\",\n",
 " \"llama3.rst\",\n",
-" \"datasets.rst\",\n",
 " \"qat_finetune.rst\",\n",
 " \"lora_finetune.rst\",\n",
 "]\n",
@@ -1586,7 +1585,6 @@
 " \"memory_optimizations.rst\",\n",
 " \"chat.rst\",\n",
 " \"llama3.rst\",\n",
-" \"datasets.rst\",\n",
 " \"qat_finetune.rst\",\n",
 " \"lora_finetune.rst\",\n",
 "]\n",
```

docs/source/getting_started/detailed_tutorial.md

Lines changed: 1 addition & 3 deletions

````diff
@@ -42,7 +42,7 @@ powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | ie
 Setup your virtual environment.
 
 ```bash
-uv venv --python 3.10
+uv sync --python 3.10
 source .venv/bin/activate
 ```
 ## Step 2: Run Llama Stack
@@ -445,7 +445,6 @@ from llama_stack_client import LlamaStackClient
 from llama_stack_client import Agent, AgentEventLogger
 from llama_stack_client.types import Document
 import uuid
-from termcolor import cprint
 
 client = LlamaStackClient(base_url="http://localhost:8321")
 
@@ -463,7 +462,6 @@ urls = [
 "memory_optimizations.rst",
 "chat.rst",
 "llama3.rst",
-"datasets.rst",
 "qat_finetune.rst",
 "lora_finetune.rst",
 ]
````
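After this change, the tutorial's `urls` list no longer contains `datasets.rst` (the page was removed upstream in pytorch/torchtune#1781, so fetching it would fail during the RAG ingestion step). A minimal sketch of the resulting list, with a check that the stale entry is gone:

```python
# The urls list from detailed_tutorial.md after this commit.
urls = [
    "memory_optimizations.rst",
    "chat.rst",
    "llama3.rst",
    "qat_finetune.rst",
    "lora_finetune.rst",
]

# "datasets.rst" was dropped because the page no longer exists in torchtune.
assert "datasets.rst" not in urls
print(len(urls))  # 5 documents remain
```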
