llama-toolchain -> llama-stack (hyphens)
parent 6665d31cdf
commit d1959e6889

9 changed files with 17 additions and 17 deletions
@@ -1,6 +1,6 @@
 # Llama CLI Reference
 
-The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
 
 ### Subcommands
 1. `download`: the `llama` CLI supports downloading models from Meta or Hugging Face.
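For context, the `download` subcommand mentioned in the hunk above is typically invoked along the following lines. This is only a sketch: the flag names (`--source`, `--model-id`, `--meta-url`, `--hf-token`) and the model identifier are assumptions rather than part of this diff, so check `llama download --help` for the options your installed version actually accepts.

```
# Sketch only: flags and the model id below are assumptions, not shown in this commit.

# Download from Meta (requires a signed download URL obtained from Meta):
llama download --source meta --model-id Meta-Llama3.1-8B-Instruct --meta-url "<SIGNED_META_URL>"

# Or download from Hugging Face (requires an access token):
llama download --source huggingface --model-id Meta-Llama3.1-8B-Instruct --hf-token "<HF_TOKEN>"
```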
@@ -480,7 +480,7 @@ This server is running a Llama model locally.
 Once the server is set up, we can test it with a client to see the example outputs.
 ```
 cd /path/to/llama-stack
-conda activate <env>  # any environment containing the llama-toolchain pip package will work
+conda activate <env>  # any environment containing the llama-stack pip package will work
 
 python -m llama_stack.apis.inference.client localhost 5000
 ```
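To illustrate roughly what the test client in the hunk above does, the `curl` call below sends a single chat-completion request to the local server. The route `/inference/chat_completion`, the payload fields, and the model name are assumptions about the inference API rather than something this diff shows; the bundled Python client remains the supported way to test.

```
# Hypothetical sketch: the route, payload shape, and model name are assumptions.
curl -s http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Meta-Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello! What can you do?"}],
        "stream": false
      }'
```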