diff --git a/docs/source/contributing/new_api_provider.md b/docs/source/contributing/new_api_provider.md
index 3fa875c50..f1b50da98 100644
--- a/docs/source/contributing/new_api_provider.md
+++ b/docs/source/contributing/new_api_provider.md
@@ -13,11 +13,12 @@ This guide contains references to walk you through adding a new API provider.
 
 ## Testing your newly added API providers
 
-1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration so we need to make sure stuff works end-to-end. See {repopath}`llama_stack/providers/tests/inference/test_text_inference.py` for an example.
+1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration, and, if it is a remote service, actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration, so we need to make sure things work end-to-end. See {repopath}`tests/client-sdk` for an example.
 
-2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/` but they aren't well-supported so far.
-3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution which includes the new provider. (b) Send a client request to the server. See `llama_stack/apis//client.py` for how this is done. These client scripts can serve as lightweight tests.
+2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find example tests in {repopath}`llama_stack/providers/tests/inference/test_text_inference.py`.
+
+3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution, which includes the new provider. (b) Send a client request to the server (see the server example in the `tests/client-sdk` README below). These client scripts can serve as lightweight tests.
 
-You can find more complex client scripts [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repo. Note down which scripts works and do not work with your distribution.
+You can find more complex client scripts in the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repo. Note down which scripts work and which do not work with your distribution.
diff --git a/tests/client-sdk/README.md b/tests/client-sdk/README.md
new file mode 100644
index 000000000..2edf6d3c8
--- /dev/null
+++ b/tests/client-sdk/README.md
@@ -0,0 +1,40 @@
+# Llama Stack Integration Tests
+You can run Llama Stack integration tests against either a Llama Stack library or a Llama Stack endpoint.
+
+To test against a Llama Stack library with a particular configuration, run
+```bash
+LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
+pytest -s -v tests/client-sdk/inference/test_inference.py
+```
+
+To test against a Llama Stack endpoint, run
+```bash
+LLAMA_STACK_BASE_URL=http://localhost:8089 \
+pytest -s -v tests/client-sdk/inference/test_inference.py
+```
+
+
+## Common options
+Depending on the API, certain custom options are enabled:
+- For tests in `inference/` and `agents/`, we support the `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides
+- For tests in `vector_io/`, we support the `--embedding-model` override
+- For tests in `safety/`, we support the `--safety-shield` override
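+
+For example, a hypothetical invocation that overrides the inference model might look like the following (the model identifier is an assumption; substitute one your provider actually serves):
+```bash
+# Run the text inference tests against the library client, overriding the model under test
+LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
+pytest -s -v tests/client-sdk/inference/test_inference.py \
+  --inference-model meta-llama/Llama-3.1-8B-Instruct
+```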
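+
+## Example: testing a distribution with a new provider
+The following is a minimal sketch of the client-server flow described in the contributing guide. The template path and port are assumptions; substitute the ones your distribution actually uses.
+```bash
+# (a) Start a Llama Stack server with a distribution that includes the new provider
+llama stack run ./llama_stack/templates/my-distro/run.yaml --port 8089
+
+# (b) In another shell, point the integration tests at the running server
+LLAMA_STACK_BASE_URL=http://localhost:8089 \
+pytest -s -v tests/client-sdk/inference/test_inference.py
+```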