mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-06-27 18:50:41 +00:00
script for running client sdk tests (#895)
# What does this PR do?

Create a script for running all client-sdk tests on the Async Library client, with the option to generate a report.

## Test Plan

```
python llama_stack/scripts/run_client_sdk_tests.py --templates together fireworks --report
```

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
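The runner's command-line interface described in the Test Plan can be sketched roughly as follows. This is a hypothetical minimal sketch, not the actual `run_client_sdk_tests.py`; only the `--templates` and `--report` flags are taken from the Test Plan, and the `tests/client-sdk/` path is an assumption:

```python
# Hypothetical sketch of a client-sdk test runner.
# The real llama_stack/scripts/run_client_sdk_tests.py may differ.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Run client-sdk tests against one or more templates."
    )
    parser.add_argument(
        "--templates",
        nargs="+",
        required=True,
        help="distribution templates to test, e.g. together fireworks",
    )
    parser.add_argument(
        "--report", action="store_true", help="generate a test report"
    )
    return parser.parse_args(argv)


def build_pytest_invocations(args):
    # One pytest invocation per template; --report is forwarded if requested.
    # An entrypoint would loop over these and call pytest.main(cmd) on each.
    runs = []
    for template in args.templates:
        cmd = ["tests/client-sdk/", f"--template={template}"]  # path assumed
        if args.report:
            cmd.append("--report")
        runs.append(cmd)
    return runs
```

With `--templates together fireworks --report`, this produces two pytest invocations, one per template, each carrying the report flag.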
This commit is contained in:
parent
a3d8c49459
commit
531940aea9
4 changed files with 74 additions and 3 deletions
```diff
@@ -10,7 +10,7 @@ conda_env: ollama
 apis:
 - agents
 - inference
-- memory
+- vector_io
 - safety
 - telemetry
 providers:
@@ -19,7 +19,7 @@ providers:
     provider_type: remote::ollama
     config:
       url: ${env.OLLAMA_URL:http://localhost:11434}
-  memory:
+  vector_io:
   - provider_id: faiss
     provider_type: inline::faiss
     config:
```
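The `${env.OLLAMA_URL:http://localhost:11434}` syntax in the template substitutes an environment variable, falling back to the default after the colon when the variable is unset. A minimal sketch of that substitution rule (a hypothetical helper for illustration, not the actual llama-stack implementation):

```python
import os
import re

# Matches ${env.VAR} or ${env.VAR:default}. Hypothetical re-implementation
# of the substitution applied when a run.yaml template is loaded.
_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")


def substitute_env(value: str, environ=os.environ) -> str:
    def replace(match):
        name, default = match.group(1), match.group(2)
        if name in environ:
            return environ[name]
        if default is not None:
            return default
        raise KeyError(f"environment variable {name} is not set and has no default")

    return _ENV_PATTERN.sub(replace, value)
```

So with `OLLAMA_URL` unset, the config resolves to `http://localhost:11434`; exporting `OLLAMA_URL` overrides it.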
|
Loading…
Add table
Add a link
Reference in a new issue