Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 19:04:19 +00:00)
# update doc for client-sdk testing (#849)
As title

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
Parent: 3d14a3d46f
Commit: 82a28f3a24

2 changed files with 25 additions and 3 deletions
**Changed file:** the guide on adding a new API provider

```diff
@@ -13,11 +13,12 @@ This guide contains references to walk you through adding a new API provider.
 
 ## Testing your newly added API providers
 
-1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration so we need to make sure stuff works end-to-end. See {repopath}`llama_stack/providers/tests/inference/test_text_inference.py` for an example.
+1. Start with an _integration test_ for your provider. That means we will instantiate the real provider, pass it real configuration and if it is a remote service, we will actually hit the remote service. We **strongly** discourage mocking for these tests at the provider level. Llama Stack is first and foremost about integration so we need to make sure stuff works end-to-end. See {repopath}`tests/client-sdk` for an example.
 
-2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in `tests/` but they aren't well-supported so far.
+2. In addition, if you want to unit test functionality within your provider, feel free to do so. You can find some tests in {repopath}`llama_stack/providers/tests/inference/test_text_inference.py`.
 
-3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution which includes the new provider. (b) Send a client request to the server. See `llama_stack/apis/<api>/client.py` for how this is done. These client scripts can serve as lightweight tests.
+3. Test with a client-server Llama Stack setup. (a) Start a Llama Stack server with your own distribution which includes the new provider. (b) Send a client request to the server. These client scripts can serve as lightweight tests.
 
 You can find more complex client scripts in the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) repo. Note down which scripts work and do not work with your distribution.
```
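
Putting steps (a) and (b) of item 3 together, here is a minimal sketch of the client-server flow, using the client-sdk tests added in this PR as the lightweight client. The template path and port are placeholders — substitute your own distribution:

```bash
# (a) Start a Llama Stack server with a distribution that includes your new provider.
#     Placeholder: substitute your own template's run.yaml and preferred port.
llama stack run ./llama_stack/templates/<your-template>/run.yaml --port 8089

# (b) Send client requests to the server by pointing the client-sdk tests at it.
LLAMA_STACK_BASE_URL=http://localhost:8089 \
pytest -s -v tests/client-sdk/inference/test_inference.py
```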

**New file:** `tests/client-sdk/README.md` (21 lines)

# Llama Stack Integration Tests

You can run the Llama Stack integration tests either against a Llama Stack library (in-process) or against a running Llama Stack endpoint.

To test against a Llama Stack library with a particular configuration, run:

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
pytest -s -v tests/client-sdk/inference/test_inference.py
```

To test against a running Llama Stack endpoint, run:

```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 \
pytest -s -v tests/client-sdk/inference/test_inference.py
```
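
If you don't have an endpoint running yet, you can start one from a distribution template first. A minimal sketch, reusing the `cerebras` template from the library example above and matching the port in `LLAMA_STACK_BASE_URL`:

```bash
# Assumption: the distribution has already been built (e.g. `llama stack build --template cerebras`)
# and any provider credentials it needs (e.g. CEREBRAS_API_KEY) are set in your environment.
llama stack run ./llama_stack/templates/cerebras/run.yaml --port 8089
```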

## Common options

Depending on the API, the tests support additional command-line options:

- For tests in `inference/` and `agents/`, we support `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides.
- For tests in `vector_io/`, we support an `--embedding-model` override.
- For tests in `safety/`, we support a `--safety-shield` override.
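
For example, to point the text inference tests at a specific model, pass the override alongside the config. The model ID below is a hypothetical placeholder — substitute one your distribution actually serves:

```bash
# Hypothetical model ID — replace with a model registered in your distribution.
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
pytest -s -v tests/client-sdk/inference/test_inference.py \
  --inference-model=meta-llama/Llama-3.1-8B-Instruct
```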