From 63df186e8873a810a37b63dd240be806eac0d913 Mon Sep 17 00:00:00 2001
From: Hardik Shah
Date: Fri, 24 Jan 2025 13:12:27 -0800
Subject: [PATCH] update README

---
 tests/client-sdk/README.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tests/client-sdk/README.md b/tests/client-sdk/README.md
index 2edf6d3c8..13142d46f 100644
--- a/tests/client-sdk/README.md
+++ b/tests/client-sdk/README.md
@@ -6,6 +6,11 @@ To test on a Llama Stack library with a certain configuration, run
 LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
 pytest -s -v tests/client-sdk/inference/test_inference.py
 ```
+or with just the template name
+```bash
+LLAMA_STACK_CONFIG=together \
+pytest -s -v tests/client-sdk/inference/test_inference.py
+```
 
 To test on a Llama Stack endpoint, run
 ```bash
@@ -13,9 +18,17 @@ LLAMA_STACK_BASE_URL=http://localhost:8089
 pytest -s -v tests/client-sdk/inference/test_inference.py
 ```
 
+## Report Generation
+
+To generate a report, run with the `--report` option
+```bash
+LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/ --report report.md
+```
 ## Common options
 Depending on the API, there are custom options enabled:
 - For tests in `inference/` and `agents/`, we support `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides
 - For tests in `vector_io/`, we support an `--embedding-model` override
 - For tests in `safety/`, we support a `--safety-shield` override
+- The report option can be passed as `--report` or `--report <path>`
+  If a path is not provided, we make a best effort to infer it from the config / template name. For URL endpoints, a path is required.
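
For reference, the "Common options" bullets in the updated README name several overrides without showing them in use. Below is a minimal sketch of how they might be combined, assuming the flags accept values the way the bullets describe; the template name is taken from the examples above, while the model, shield, and embedding identifiers are illustrative placeholders rather than values from this patch.

```bash
# Text and vision inference tests against the "together" template, with the
# model overrides described under "Common options" (identifiers are placeholders).
LLAMA_STACK_CONFIG=together \
pytest -s -v tests/client-sdk/inference/ \
  --inference-model=meta-llama/Llama-3.1-8B-Instruct \
  --vision-inference-model=meta-llama/Llama-3.2-11B-Vision-Instruct

# Safety tests with a specific shield (placeholder identifier).
LLAMA_STACK_CONFIG=together \
pytest -s -v tests/client-sdk/safety/ --safety-shield=meta-llama/Llama-Guard-3-8B

# vector_io tests with a specific embedding model (placeholder identifier).
LLAMA_STACK_CONFIG=together \
pytest -s -v tests/client-sdk/vector_io/ --embedding-model=all-MiniLM-L6-v2
```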
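
Likewise, the added note that URL endpoints require an explicit report path might translate into an invocation like the following; the port and the `report.md` file name are reused from the examples above, and the `--report <path>` form is the one described in the added bullet.

```bash
# Against a running endpoint there is no config / template name to infer a
# report path from, so the path must be passed explicitly.
LLAMA_STACK_BASE_URL=http://localhost:8089 \
pytest -s -v tests/client-sdk/ --report report.md
```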