# Llama Stack Integration Tests
You can run the Llama Stack integration tests against either a Llama Stack library or a Llama Stack endpoint.
To test against a Llama Stack library with a specific configuration, run:

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml pytest -s -v tests/api/inference/
```
or use just the template name:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/api/inference/
```
To test against a Llama Stack endpoint, run:

```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/api/inference
```
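If you don't already have a server running, one way to bring up a local endpoint is with the `llama stack run` CLI. This is a minimal sketch; the template name and port are illustrative assumptions, not fixed requirements:

```bash
# Illustrative: serve the "together" template locally on port 8089,
# then point the tests at it via LLAMA_STACK_BASE_URL as shown above.
llama stack run together --port 8089
```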
## Report Generation
To generate a report, run with the `--report` option:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/api/ --report report.md
```
## Common options

Depending on the API, there are custom options enabled:

- For tests in `inference/` and `agents/`, we support the `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides.
- For tests in `vector_io/`, we support the `--embedding-model` override.
- For tests in `safety/`, we support the `--safety-shield` override.
- The report param can be either `--report` or `--report <path>`. If a path is not provided, we make a best effort to infer it from the config / template name. For URL endpoints, a path is required. See the examples after this list.
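For example, the overrides compose with either connection mode. The model ID below is a hypothetical placeholder; substitute a model registered in your distribution:

```bash
# Hypothetical model ID; use one available in your stack.
LLAMA_STACK_CONFIG=together pytest -s -v tests/api/inference/ \
  --inference-model meta-llama/Llama-3.1-8B-Instruct

# For URL endpoints, an explicit report path is required.
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/api/ --report report.md
```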