# llama (client-side) CLI Reference

The llama-stack-client CLI allows you to query information about the distribution.

## Basic Commands

### llama-stack-client

```bash
$ llama-stack-client -h

usage: llama-stack-client [-h] {models,memory_banks,shields} ...

Welcome to the LlamaStackClient CLI

options:
  -h, --help            show this help message and exit

subcommands:
  {models,memory_banks,shields}
```

### llama-stack-client configure

```bash
$ llama-stack-client configure
> Enter the host name of the Llama Stack distribution server: localhost
> Enter the port number of the Llama Stack distribution server: 8321
Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:8321
```

### llama-stack-client providers list

```bash
$ llama-stack-client providers list
+-----------+----------------+-----------------+
| API       | Provider ID    | Provider Type   |
+===========+================+=================+
| scoring   | meta0          | meta-reference  |
+-----------+----------------+-----------------+
| datasetio | meta0          | meta-reference  |
+-----------+----------------+-----------------+
| inference | tgi0           | remote::tgi     |
+-----------+----------------+-----------------+
| memory    | meta-reference | meta-reference  |
+-----------+----------------+-----------------+
| agents    | meta-reference | meta-reference  |
+-----------+----------------+-----------------+
| telemetry | meta-reference | meta-reference  |
+-----------+----------------+-----------------+
| safety    | meta-reference | meta-reference  |
+-----------+----------------+-----------------+
```

## Model Management

### llama-stack-client models list

```bash
$ llama-stack-client models list
+----------------------+----------------------+---------------+----------------------------------------------------------+
| identifier           | llama_model          | provider_id   | metadata                                                 |
+======================+======================+===============+==========================================================+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | tgi0          | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} |
+----------------------+----------------------+---------------+----------------------------------------------------------+
```

### llama-stack-client models get

```bash
$ llama-stack-client models get Llama3.1-8B-Instruct
+----------------------+----------------------+----------------------------------------------------------+---------------+
| identifier           | llama_model          | metadata                                                 | provider_id   |
+======================+======================+==========================================================+===============+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} | tgi0          |
+----------------------+----------------------+----------------------------------------------------------+---------------+
```

If the requested model is not known to the distribution, an error message is printed instead:

```bash
$ llama-stack-client models get Random-Model

Model Random-Model is not found at distribution endpoint host:port. Please ensure endpoint is serving specified model.
```

### llama-stack-client models register

```bash
$ llama-stack-client models register <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
```
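
For example, a registration might look like the sketch below. The model ID, provider ID, and metadata are illustrative placeholders, not values every distribution serves:

```bash
# Illustrative only: register a model with an existing inference provider.
# Substitute a model ID and provider ID that your distribution actually exposes.
$ llama-stack-client models register Llama3.2-3B-Instruct \
    --provider-id vllm0 \
    --provider-model-id meta-llama/Llama-3.2-3B-Instruct \
    --metadata '{"description": "3B instruct model served via vLLM"}'
```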

### llama-stack-client models update

```bash
$ llama-stack-client models update <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
```
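
For instance, to replace the metadata on an already-registered model (the model ID and metadata payload below are placeholders):

```bash
# Illustrative only: overwrite the metadata attached to a registered model.
$ llama-stack-client models update Llama3.1-8B-Instruct \
    --metadata '{"huggingface_repo": "meta-llama/Llama-3.1-8B-Instruct", "context_length": 131072}'
```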

### llama-stack-client models delete

```bash
$ llama-stack-client models delete <model_id>
```

## Vector DB Management

### llama-stack-client vector_dbs list

```bash
$ llama-stack-client vector_dbs list
+--------------+----------------+----------------------+----------------+------------------------------------+
| identifier   | provider_id    | provider_resource_id | vector_db_type | params                             |
+==============+================+======================+================+====================================+
| test_bank    | meta-reference | test_bank            | vector         | embedding_model: all-MiniLM-L6-v2  |
|              |                |                      |                | embedding_dimension: 384           |
+--------------+----------------+----------------------+----------------+------------------------------------+
```

### llama-stack-client vector_dbs register

```bash
$ llama-stack-client vector_dbs register <vector-db-id> [--provider-id <provider-id>] [--provider-vector-db-id <provider-vector-db-id>] [--embedding-model <embedding-model>] [--embedding-dimension <embedding-dimension>]
```

Options:

- --provider-id: Optional. Provider ID for the vector db
- --provider-vector-db-id: Optional. Provider's vector db ID
- --embedding-model: Optional. Embedding model to use. Default: "all-MiniLM-L6-v2"
- --embedding-dimension: Optional. Dimension of embeddings. Default: 384
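
Putting these options together, a registration could look like the sketch below; the vector DB ID and provider ID are placeholders, and the embedding settings simply restate the defaults listed above:

```bash
# Illustrative only: register a vector DB with the default embedding settings.
# "my_documents" and "faiss" are placeholder identifiers.
$ llama-stack-client vector_dbs register my_documents \
    --provider-id faiss \
    --embedding-model all-MiniLM-L6-v2 \
    --embedding-dimension 384
```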

### llama-stack-client vector_dbs unregister

```bash
$ llama-stack-client vector_dbs unregister <vector-db-id>
```

## Shield Management

### llama-stack-client shields list

```bash
$ llama-stack-client shields list
+--------------+----------+----------------+-------------+
| identifier   | params   | provider_id    | type        |
+==============+==========+================+=============+
| llama_guard  | {}       | meta-reference | llama_guard |
+--------------+----------+----------------+-------------+
```

### llama-stack-client shields register

```bash
$ llama-stack-client shields register --shield-id <shield-id> [--provider-id <provider-id>] [--provider-shield-id <provider-shield-id>] [--params <params>]
```

Options:

- --shield-id: Required. ID of the shield
- --provider-id: Optional. Provider ID for the shield
- --provider-shield-id: Optional. Provider's shield ID
- --params: Optional. JSON configuration parameters for the shield
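
As a sketch, registering a Llama Guard shield might look like this; the shield ID and provider ID mirror the listing above but are placeholders for whatever your distribution actually provides:

```bash
# Illustrative only: register a shield with an empty params object.
$ llama-stack-client shields register \
    --shield-id llama_guard \
    --provider-id meta-reference \
    --params '{}'
```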

## Eval Task Management

### llama-stack-client benchmarks list

```bash
$ llama-stack-client benchmarks list
```

### llama-stack-client benchmarks register

```bash
$ llama-stack-client benchmarks register --eval-task-id <eval-task-id> --dataset-id <dataset-id> --scoring-functions <function1> [<function2> ...] [--provider-id <provider-id>] [--provider-eval-task-id <provider-eval-task-id>] [--metadata <metadata>]
```

Options:

- --eval-task-id: Required. ID of the eval task
- --dataset-id: Required. ID of the dataset to evaluate
- --scoring-functions: Required. One or more scoring functions to use for evaluation
- --provider-id: Optional. Provider ID for the eval task
- --provider-eval-task-id: Optional. Provider's eval task ID
- --metadata: Optional. Metadata for the eval task in JSON format
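
For illustration, registering a benchmark over an MMLU-style dataset might look like the following; the eval task ID, dataset ID, and scoring function name are assumptions, so substitute identifiers that are actually registered with your distribution:

```bash
# Illustrative only: register an eval task that scores an MMLU-style dataset.
# All identifiers below are placeholders.
$ llama-stack-client benchmarks register \
    --eval-task-id meta-reference-mmlu \
    --dataset-id mmlu \
    --scoring-functions basic::regex_parser_multiple_choice_answer \
    --metadata '{"description": "MMLU multiple-choice benchmark"}'
```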

## Eval execution

### llama-stack-client eval run-benchmark

```bash
$ llama-stack-client eval run-benchmark <eval-task-id1> [<eval-task-id2> ...] --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
```

Options:

- --eval-task-config: Required. Path to the eval task config file in JSON format
- --output-dir: Required. Path to the directory where evaluation results will be saved
- --num-examples: Optional. Number of examples to evaluate (useful for debugging)
- --visualize: Optional flag. If set, visualizes evaluation results after completion

Example benchmark_config.json:

```json
{
    "type": "benchmark",
    "eval_candidate": {
        "type": "model",
        "model": "Llama3.1-405B-Instruct",
        "sampling_params": {
            "strategy": "greedy"
        }
    }
}
```
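
With a config like the one above saved as benchmark_config.json, a run could be invoked roughly as follows; the eval task ID and paths are placeholders:

```bash
# Illustrative only: run one benchmark on 10 examples as a quick sanity check.
$ llama-stack-client eval run-benchmark meta-reference-mmlu \
    --eval-task-config ./benchmark_config.json \
    --output-dir ./eval_results \
    --num-examples 10 \
    --visualize
```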

### llama-stack-client eval run-scoring

```bash
$ llama-stack-client eval run-scoring <eval-task-id> --eval-task-config <config-file> --output-dir <output-dir> [--num-examples <num>] [--visualize]
```

Options:

- --eval-task-config: Required. Path to the eval task config file in JSON format
- --output-dir: Required. Path to the directory where scoring results will be saved
- --num-examples: Optional. Number of examples to evaluate (useful for debugging)
- --visualize: Optional flag. If set, visualizes scoring results after completion
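
A scoring run follows the same shape; in this sketch the task ID, config path, and output directory are again placeholders:

```bash
# Illustrative only: score a single eval task with the same config file.
$ llama-stack-client eval run-scoring meta-reference-mmlu \
    --eval-task-config ./benchmark_config.json \
    --output-dir ./scoring_results \
    --num-examples 10
```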

## Tool Group Management

### llama-stack-client toolgroups list

```bash
$ llama-stack-client toolgroups list
+---------------------------+------------------+------+--------------+
| identifier                | provider_id      | args | mcp_endpoint |
+===========================+==================+======+==============+
| builtin::code_interpreter | code-interpreter | None | None         |
+---------------------------+------------------+------+--------------+
| builtin::rag              | rag-runtime      | None | None         |
+---------------------------+------------------+------+--------------+
| builtin::websearch        | tavily-search    | None | None         |
+---------------------------+------------------+------+--------------+
```

### llama-stack-client toolgroups get

```bash
$ llama-stack-client toolgroups get <toolgroup_id>
```

Shows detailed information about a specific toolgroup. If the toolgroup is not found, displays an error message.

### llama-stack-client toolgroups register

```bash
$ llama-stack-client toolgroups register <toolgroup_id> [--provider-id <provider-id>] [--provider-toolgroup-id <provider-toolgroup-id>] [--mcp-config <mcp-config>] [--args <args>]
```

Options:

- --provider-id: Optional. Provider ID for the toolgroup
- --provider-toolgroup-id: Optional. Provider's toolgroup ID
- --mcp-config: Optional. JSON configuration for the MCP endpoint
- --args: Optional. JSON arguments for the toolgroup
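
As an illustrative sketch, registering a toolgroup backed by an MCP server might look like the following; the toolgroup ID, provider ID, and the shape of the --mcp-config payload are assumptions rather than values shipped with any particular distribution:

```bash
# Illustrative only: register a custom toolgroup that points at an MCP server.
# The toolgroup ID, provider ID, and endpoint URI are placeholders.
$ llama-stack-client toolgroups register mcp::my_tools \
    --provider-id model-context-protocol \
    --mcp-config '{"uri": "http://localhost:8000/sse"}'
```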

### llama-stack-client toolgroups unregister

```bash
$ llama-stack-client toolgroups unregister <toolgroup_id>
```