mirror of https://github.com/meta-llama/llama-stack.git
synced 2025-06-28 19:04:19 +00:00

docs: Fixing outputs in client cli and formatting suggestions (#1668)

**Description:** Updates the client CLI example output and adds suggested formatting for some of the required and optional CLI flags. If the reformatting is unnecessary, I can remove it from this PR and keep only the example-output fix.

This commit is contained in:
parent f11b6db40d
commit ac51564ad5

1 changed file with 66 additions and 43 deletions
@@ -6,17 +6,32 @@ The `llama-stack-client` CLI allows you to query information about the distribut

### `llama-stack-client`
```bash
llama-stack-client
Usage: llama-stack-client [OPTIONS] COMMAND [ARGS]...

  Welcome to the LlamaStackClient CLI

Options:
  --version        Show the version and exit.
  --endpoint TEXT  Llama Stack distribution endpoint
  --api-key TEXT   Llama Stack distribution API key
  --config TEXT    Path to config file
  --help           Show this message and exit.

Commands:
  configure          Configure Llama Stack Client CLI.
  datasets           Manage datasets.
  eval               Run evaluation tasks.
  eval_tasks         Manage evaluation tasks.
  inference          Inference (chat).
  inspect            Inspect server configuration.
  models             Manage GenAI models.
  post_training      Post-training.
  providers          Manage API providers.
  scoring_functions  Manage scoring functions.
  shields            Manage safety shield services.
  toolgroups         Manage available tool groups.
  vector_dbs         Manage vector databases.
```

### `llama-stack-client configure`
@@ -127,11 +142,11 @@ llama-stack-client vector_dbs list

```bash
llama-stack-client vector_dbs register <vector-db-id> [--provider-id <provider-id>] [--provider-vector-db-id <provider-vector-db-id>] [--embedding-model <embedding-model>] [--embedding-dimension <embedding-dimension>]
```

Optional arguments:
- `--provider-id`: Provider ID for the vector db
- `--provider-vector-db-id`: Provider's vector db ID
- `--embedding-model`: Embedding model to use. Default: "all-MiniLM-L6-v2"
- `--embedding-dimension`: Dimension of embeddings. Default: 384
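With the defaults above, a registration only has to supply the vector db id. A minimal sketch, assuming a hypothetical `my-docs` id and `faiss` provider id; the command is printed rather than executed so the sketch stays self-contained:

```shell
# Hypothetical ids; the explicit flags simply restate the documented defaults.
register_cmd="llama-stack-client vector_dbs register my-docs \
  --provider-id faiss \
  --embedding-model all-MiniLM-L6-v2 \
  --embedding-dimension 384"
echo "$register_cmd"
```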

### `llama-stack-client vector_dbs unregister`

@@ -157,11 +172,13 @@ llama-stack-client shields list

```bash
llama-stack-client shields register --shield-id <shield-id> [--provider-id <provider-id>] [--provider-shield-id <provider-shield-id>] [--params <params>]
```

Required arguments:
- `--shield-id`: ID of the shield

Optional arguments:
- `--provider-id`: Provider ID for the shield
- `--provider-shield-id`: Provider's shield ID
- `--params`: JSON configuration parameters for the shield
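Since `--params` takes raw JSON, quoting mistakes are the usual failure mode. A sketch that validates the JSON before handing it to the CLI; the `content_safety` shield id and the params payload are hypothetical, and the final command is printed rather than run:

```shell
# Hypothetical shield id and params; check the JSON parses before using it.
shield_id="content_safety"
params='{"excluded_categories": []}'
echo "$params" | python3 -m json.tool > /dev/null && echo "params: valid JSON"
echo "llama-stack-client shields register --shield-id $shield_id --params '$params'"
```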

## Eval Task Management
@@ -175,13 +192,15 @@ llama-stack-client benchmarks list

```bash
llama-stack-client benchmarks register --eval-task-id <eval-task-id> --dataset-id <dataset-id> --scoring-functions <function1> [<function2> ...] [--provider-id <provider-id>] [--provider-eval-task-id <provider-eval-task-id>] [--metadata <metadata>]
```

Required arguments:
- `--eval-task-id`: ID of the eval task
- `--dataset-id`: ID of the dataset to evaluate
- `--scoring-functions`: One or more scoring functions to use for evaluation

Optional arguments:
- `--provider-id`: Provider ID for the eval task
- `--provider-eval-task-id`: Provider's eval task ID
- `--metadata`: Metadata for the eval task in JSON format
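`--scoring-functions` accepts several names separated by spaces, while `--metadata` takes a JSON string. A sketch with hypothetical task, dataset, and scoring-function names; the command is printed, not executed:

```shell
# Hypothetical ids; multiple scoring functions are passed as separate arguments.
metadata='{"description": "smoke test run"}'
echo "$metadata" | python3 -m json.tool > /dev/null && echo "metadata: valid JSON"
echo "llama-stack-client benchmarks register \
  --eval-task-id my-eval-task \
  --dataset-id my-dataset \
  --scoring-functions func_one func_two \
  --metadata '$metadata'"
```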

## Eval execution

### `llama-stack-client eval run-benchmark`
@@ -189,11 +208,13 @@ Options:

```bash
llama-stack-client eval run-benchmark <eval-task-id1> [<eval-task-id2> ...] --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
```

Required arguments:
- `--eval-task-config`: Path to the eval task config file in JSON format
- `--output-dir`: Path to the directory where evaluation results will be saved

Optional arguments:
- `--num-examples`: Number of examples to evaluate (useful for debugging)
- `--visualize`: If set, visualizes evaluation results after completion
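Results land under `--output-dir`, so it helps to have that directory in place first. A sketch with a hypothetical task id and config path; the run command itself is printed rather than executed:

```shell
# Hypothetical task id; evaluation results are written under --output-dir.
outdir="$(mktemp -d)"
echo "llama-stack-client eval run-benchmark my-eval-task \
  --eval-task-config ./benchmark_config.json \
  --output-dir $outdir \
  --num-examples 10 \
  --visualize"
test -d "$outdir" && echo "output dir ready: $outdir"
```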

Example benchmark_config.json:
@@ -214,11 +235,13 @@ Example benchmark_config.json:

```bash
llama-stack-client eval run-scoring <eval-task-id> --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
```

Required arguments:
- `--eval-task-config`: Path to the eval task config file in JSON format
- `--output-dir`: Path to the directory where scoring results will be saved

Optional arguments:
- `--num-examples`: Number of examples to evaluate (useful for debugging)
- `--visualize`: If set, visualizes scoring results after completion

## Tool Group Management
@@ -230,11 +253,11 @@ llama-stack-client toolgroups list

```
+---------------------------+------------------+------+---------------+
| identifier                | provider_id      | args | mcp_endpoint  |
+===========================+==================+======+===============+
| builtin::code_interpreter | code-interpreter | None | None          |
+---------------------------+------------------+------+---------------+
| builtin::rag              | rag-runtime      | None | None          |
+---------------------------+------------------+------+---------------+
| builtin::websearch        | tavily-search    | None | None          |
+---------------------------+------------------+------+---------------+
```
@@ -250,11 +273,11 @@ Shows detailed information about a specific toolgroup. If the toolgroup is not f

```bash
llama-stack-client toolgroups register <toolgroup_id> [--provider-id <provider-id>] [--provider-toolgroup-id <provider-toolgroup-id>] [--mcp-config <mcp-config>] [--args <args>]
```

Optional arguments:
- `--provider-id`: Provider ID for the toolgroup
- `--provider-toolgroup-id`: Provider's toolgroup ID
- `--mcp-config`: JSON configuration for the MCP endpoint
- `--args`: JSON arguments for the toolgroup
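`--mcp-config` and `--args` both take JSON strings. A sketch that validates the config first; the toolgroup id, the endpoint URL, and the assumed `{"uri": ...}` shape of the MCP config are all hypothetical, and the command is printed rather than run:

```shell
# Hypothetical toolgroup id and endpoint; validate the JSON before registering.
mcp_config='{"uri": "http://localhost:8000/sse"}'
echo "$mcp_config" | python3 -m json.tool > /dev/null && echo "mcp-config: valid JSON"
echo "llama-stack-client toolgroups register mcp::my-tools --mcp-config '$mcp_config'"
```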

### `llama-stack-client toolgroups unregister`