update doc for testing

Sixian Yi 2025-01-22 22:59:49 -08:00
parent 3d14a3d46f
commit 3737542c46
2 changed files with 25 additions and 3 deletions


@@ -13,11 +13,12 @@ This guide contains references to walk you through adding a new API provider.
## Testing your newly added API providers
-1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration, and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration, so we need to make sure stuff works end-to-end. See {repopath}`llama_stack/providers/tests/inference/test_text_inference.py` for an example.
+1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration, and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration, so we need to make sure stuff works end-to-end. See {repopath}`tests/client-sdk` for an example.
-2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/`, but they aren't well-supported so far.
-3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution which includes the new provider. (b) Send a client request to the server. See `llama_stack/apis/<api>/client.py` for how this is done. These client scripts can serve as lightweight tests.
+2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in {repopath}`llama_stack/providers/tests/inference/test_text_inference.py`.
+3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution, which includes the new provider. (b) Send a client request to the server. These client scripts can serve as lightweight tests (a minimal sketch appears below).
+You can find more complex client scripts in the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repo. Note down which scripts work and which do not with your distribution.
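
As a concrete illustration of step 3(b), a lightweight client script might look like the sketch below. This is a minimal example, not the project's canonical test: it assumes the `llama-stack-client` Python SDK, a server already listening on port 8089, and a placeholder model id; adjust all three to your distribution.

```python
# Minimal smoke test: send one chat completion request to a running
# Llama Stack server and print the reply.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8089")

response = client.inference.chat_completion(
    # Placeholder model id -- use one your distribution actually serves.
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.completion_message.content)
```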


@@ -0,0 +1,21 @@
# Llama Stack Integration Tests
You can run the Llama Stack integration tests against either a Llama Stack library (the stack running in-process) or a Llama Stack endpoint (a running server).
To test against a Llama Stack library with a particular configuration, run:
```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
pytest -s -v tests/client-sdk/inference/test_inference.py
```
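
"Library" mode here means the stack is instantiated in-process from the given config instead of being reached over HTTP. A rough sketch of what this amounts to, assuming the `LlamaStackAsLibraryClient` export (the exact import path and method names may differ across versions):

```python
# In-process ("library") client built from a run config; no server required.
from llama_stack import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("./llama_stack/templates/cerebras/run.yaml")
client.initialize()  # instantiate the providers declared in the config

# From here `client` exposes the same surface as the HTTP client,
# e.g. client.inference.chat_completion(...)
```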
To test against a running Llama Stack endpoint, run:
```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 \
pytest -s -v tests/client-sdk/inference/test_inference.py
```
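
If you do not already have an endpoint to point at, one way to start one is with the `llama stack run` command, assuming the CLI is installed and the port matches the URL above:

```bash
# Start a server from a run config on the port the tests will target
llama stack run ./llama_stack/templates/cerebras/run.yaml --port 8089
```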
## Common options
Depending on the API under test, additional command-line options are enabled (see the example after this list):
- For tests in `inference/` and `agents/`, we support the `--inference-model` override (used in text inference tests) and the `--vision-inference-model` override (used only in image inference tests)
- For tests in `vector_io/`, we support the `--embedding-model` override
- For tests in `safety/`, we support the `--safety-shield` override
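
For example, to run the text inference tests against a specific model (the model id below is a placeholder; substitute one your distribution serves):

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
pytest -s -v tests/client-sdk/inference/test_inference.py \
  --inference-model meta-llama/Llama-3.1-8B-Instruct
```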