chore: Stack server no longer depends on llama-stack-client

Ashwin Bharambe 2025-11-06 11:49:37 -08:00
parent 9df073450f
commit 2221cc2cc4
12 changed files with 24 additions and 20 deletions
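Because the server package no longer declares the client SDK as a dependency, anything that talks to a stack server must now install `llama-stack-client` explicitly. A minimal smoke test of the separately installed client might look like this (a sketch only; the default port 8321, the import path, and the `models.list()` call follow the llama-stack-client README and are assumptions, not part of this commit):

```bash
# Smoke-test the explicitly installed client against a running stack server.
python - <<'EOF'
from llama_stack_client import LlamaStackClient  # assumed import path

# base_url assumes the server's default port 8321 (not shown in this diff)
client = LlamaStackClient(base_url="http://localhost:8321")
print(client.models.list())  # assumed SDK call; verify against the client docs
EOF
```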


@@ -11,7 +11,7 @@ If you are planning to use an external service for Inference (even Ollama or TGI
 This avoids the overhead of setting up a server.
 ```bash
 # setup
-uv pip install llama-stack
+uv pip install llama-stack llama-stack-client
 llama stack list-deps starter | xargs -L1 uv pip install
 ```
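Environments built with the old single-package command will be missing the client, so a quick sanity check after setup can be useful (a minimal sketch; the import name `llama_stack_client` is an assumption, not shown in this diff):

```bash
# Check that both packages resolve; install the client if it is missing.
python -c "import llama_stack, llama_stack_client" 2>/dev/null \
  || uv pip install llama-stack-client
```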


@@ -37,7 +37,7 @@
 "outputs": [],
 "source": [
 "# NBVAL_SKIP\n",
-"!pip install -U llama-stack\n",
+"!pip install -U llama-stack llama-stack-client\n",
 "llama stack list-deps fireworks | xargs -L1 uv pip install\n"
 ]
 },


@@ -44,7 +44,7 @@
 "outputs": [],
 "source": [
 "# NBVAL_SKIP\n",
-"!pip install -U llama-stack"
+"!pip install -U llama-stack llama-stack-client\n",
 ]
 },
 {


@@ -74,6 +74,7 @@
 "source": [
 "```bash\n",
 "uv sync --extra dev\n",
+"uv pip install -U llama-stack-client\n",
 "uv pip install -e .\n",
 "source .venv/bin/activate\n",
 "```"