diff --git a/tests/client-sdk/README.md b/tests/client-sdk/README.md
index 2edf6d3c8..13142d46f 100644
--- a/tests/client-sdk/README.md
+++ b/tests/client-sdk/README.md
@@ -6,6 +6,11 @@ To test on a Llama Stack library with certain configuration, run
 LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml
 pytest -s -v tests/client-sdk/inference/test_inference.py
 ```
+or with just the template name
+```bash
+LLAMA_STACK_CONFIG=together \
+pytest -s -v tests/client-sdk/inference/test_inference.py
+```
 
 To test on a Llama Stack endpoint, run
 ```bash
@@ -13,9 +18,28 @@ LLAMA_STACK_BASE_URL=http://localhost:8089
 pytest -s -v tests/client-sdk/inference/test_inference.py
 ```
 
+## Report Generation
+
+To generate a report, run with the `--report` option
+```bash
+LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/ --report report.md
+```
 ## Common options
 
 Depending on the API, there are custom options enabled:
 - For tests in `inference/` and `agents/`, we support `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides
 - For tests in `vector_io/`, we support an `--embedding-model` override
 - For tests in `safety/`, we support a `--safety-shield` override
+- For report generation, the option can be `--report` or `--report <path>`
+If a path is not provided, we make a best-effort attempt to infer one from the config / template name. For URL endpoints, a path is required.
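+For example, to generate a report when testing against a URL endpoint (the `report.md` path here is illustrative):
+```bash
+LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/ --report report.md
+```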
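+
+The model override options listed above can be combined with any config; a sketch, with illustrative model IDs:
+```bash
+LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/inference/ \
+  --inference-model meta-llama/Llama-3.1-8B-Instruct \
+  --vision-inference-model meta-llama/Llama-3.2-11B-Vision-Instruct
+```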