forked from phoenix-oss/llama-stack-mirror
test: Split inference tests to text and vision (#1008)
# What does this PR do?

This PR splits the inference tests into text and vision suites to make testing on the vLLM provider easier, as mentioned in https://github.com/meta-llama/llama-stack/pull/951. Serving multiple models (e.g. Llama-3.2-11B-Vision-Instruct and Llama-3.1-8B-Instruct) on a single port via the OpenAI API is [not supported yet](https://docs.vllm.ai/en/v0.5.5/serving/faq.html), so testing both at the same time is tricky.

## Test Plan

All previously passing text-related tests still pass:

`LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_text_inference.py`

All vision tests pass via:

`LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_vision_inference.py`

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
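Because of the single-port limitation above, reproducing this test plan needs one vLLM OpenAI-compatible server per model. A minimal sketch of that setup follows; the ports, model names, and entrypoint flags are illustrative assumptions, not commands from this PR:

```bash
# Assumed setup: one vLLM OpenAI-compatible server per model, on separate
# ports, since a single server cannot serve both models at once.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3.1-8B-Instruct --port 8001 &
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3.2-11B-Vision-Instruct --port 8002 &

# With a Llama Stack distribution configured against each server, the split
# suites from the test plan above can then be run independently.
```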
parent a9950ce806
commit c97e05f75e
4 changed files with 137 additions and 122 deletions
@@ -4,18 +4,18 @@ You can run llama stack integration tests on either a Llama Stack Library or a Llama Stack endpoint.
 
 To test on a Llama Stack library with certain configuration, run
 ```bash
 LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml
-pytest -s -v tests/client-sdk/inference/test_inference.py
+pytest -s -v tests/client-sdk/inference/
 ```
 or just the template name
 ```bash
 LLAMA_STACK_CONFIG=together
-pytest -s -v tests/client-sdk/inference/test_inference.py
+pytest -s -v tests/client-sdk/inference/
 ```
 
 To test on a Llama Stack endpoint, run
 ```bash
 LLAMA_STACK_BASE_URL=http://localhost:8089
-pytest -s -v tests/client-sdk/inference/test_inference.py
+pytest -s -v tests/client-sdk/inference
 ```
 
 ## Report Generation
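With the split from this PR, the directory-level `pytest` invocations above run both the text and vision suites. When an endpoint serves only one of the two models (the vLLM situation described in the commit message), a single suite can be selected by file. A sketch reusing the example endpoint above with the new file names from this PR:

```bash
# Text-only suite against the example endpoint
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/inference/test_text_inference.py

# Vision-only suite
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/inference/test_vision_inference.py
```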