mirror of https://github.com/meta-llama/llama-stack.git · synced 2025-08-12 04:50:39 +00:00
docs: remove pure venv references (#3047)
# What does this PR do?

Remove pure venv (without uv) references in docs.

## Test Plan
This commit is contained in:
parent e9fced773a · commit 8ba04205ac
4 changed files with 7 additions and 13 deletions
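Every file gets the same substitution: the `python -m venv` plus `activate` pair is replaced by a `uv venv` invocation. A representative before/after, with an illustrative environment name:

```bash
# Before: pure venv
python -m venv myenv
source myenv/bin/activate  # On Windows: myenv\Scripts\activate

# After: uv-managed virtual environment
uv venv myenv --python 3.12
source myenv/bin/activate  # On Windows: myenv\Scripts\activate
```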
````diff
@@ -56,12 +56,12 @@ Breaking down the demo app, this section will show the core pieces that are used
 ### Setup Remote Inferencing
 Start a Llama Stack server on localhost. Here is an example of how you can do this using the firework.ai distribution:
 ```
-python -m venv stack-fireworks
+uv venv starter --python 3.12
-source stack-fireworks/bin/activate # On Windows: stack-fireworks\Scripts\activate
+source starter/bin/activate # On Windows: starter\Scripts\activate
 pip install --no-cache llama-stack==0.2.2
-llama stack build --distro fireworks --image-type venv
+llama stack build --distro starter --image-type venv
 export FIREWORKS_API_KEY=<SOME_KEY>
-llama stack run fireworks --port 5050
+llama stack run starter --port 5050
 ```

 Ensure the Llama Stack server version is the same as the Kotlin SDK Library for maximum compatibility.
````
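Once the starter distribution is up on port 5050, a quick connectivity check from another terminal can use the client CLI; the subcommands below are a sketch based on the `llama-stack-client` CLI referenced later in these docs, not part of this diff:

```bash
# Point the client CLI at the locally running server (port from the snippet above).
llama-stack-client configure --endpoint http://localhost:5050

# List the models the starter distribution is serving.
llama-stack-client models list
```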
````diff
@@ -150,13 +150,7 @@ pip install llama-stack-client
 ```
 :::

-:::{tab-item} Install with `venv`
-```bash
-python -m venv stack-client
-source stack-client/bin/activate # On Windows: stack-client\Scripts\activate
-pip install llama-stack-client
-```
-:::
 ::::

 Now let's use the `llama-stack-client` [CLI](../references/llama_stack_client_cli_reference.md) to check the
````
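The removed tab installed `llama-stack-client` into a plain venv; a uv-based equivalent of that snippet, mirroring the pattern used elsewhere in this PR (environment name illustrative), would be:

```bash
# Create and activate an environment with uv instead of python -m venv.
uv venv stack-client --python 3.12
source stack-client/bin/activate  # On Windows: stack-client\Scripts\activate

# Install the client SDK into the environment.
uv pip install llama-stack-client
```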
````diff
@@ -19,7 +19,7 @@ You have two ways to install Llama Stack:
 cd ~/local
 git clone git@github.com:meta-llama/llama-stack.git

-python -m venv myenv
+uv venv myenv --python 3.12
 source myenv/bin/activate # On Windows: myenv\Scripts\activate

 cd llama-stack
````
````diff
@@ -19,7 +19,7 @@ You have two ways to install Llama Stack:
 cd ~/local
 git clone git@github.com:meta-llama/llama-stack.git

-python -m venv myenv
+uv venv myenv --python 3.12
 source myenv/bin/activate # On Windows: myenv\Scripts\activate

 cd llama-stack
````
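Both install-from-source hunks stop just before the actual install step; a plausible continuation after activating the uv environment (the editable-install command is an assumption, not shown in the hunks above) is:

```bash
cd llama-stack

# Install llama-stack from the checked-out source into the active environment
# (assumed follow-up step; not part of this diff).
uv pip install -e .
```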