forked from phoenix-oss/llama-stack-mirror
fix --endpoint docs
This commit is contained in:
parent
7ba95a8e74
commit
28ce511986
1 changed files with 3 additions and 2 deletions
@@ -51,7 +51,8 @@ pip install llama-stack-client

 Let's use the `llama-stack-client` CLI to check the connectivity to the server.

 ```bash
-llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT models list
+llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
+llama-stack-client models list
 ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
 ┃ identifier                       ┃ provider_id ┃ provider_resource_id      ┃ metadata ┃
 ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
@@ -61,7 +62,7 @@ llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT models list

 You can test basic Llama inference completion using the CLI too.

 ```bash
-llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT \
+llama-stack-client \
   inference chat-completion \
   --message "hello, what model are you?"
 ```
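The docs change above replaces the per-command `--endpoint` flag with a one-time `configure` step. A sketch of the corrected workflow, assuming a Llama Stack server is already running on `$LLAMA_STACK_PORT`:

```shell
# One-time setup: point the client at the running server.
# After this, the endpoint is persisted in the client's config.
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT

# Subsequent commands no longer need --endpoint on every invocation.
llama-stack-client models list

llama-stack-client \
  inference chat-completion \
  --message "hello, what model are you?"
```

The design choice behind the fix: `--endpoint` is a flag of the `configure` subcommand, not a global option, so the old docs invocation (`llama-stack-client --endpoint ... models list`) did not work as written.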