# What does this PR do?

This PR splits the inference tests into text and vision to make testing on the vLLM provider easier, as mentioned in https://github.com/meta-llama/llama-stack/pull/951. Serving multiple models (e.g. Llama-3.2-11B-Vision-Instruct and Llama-3.1-8B-Instruct) on a single port via the OpenAI API is [not supported yet](https://docs.vllm.ai/en/v0.5.5/serving/faq.html), which makes it tricky to test both at the same time.

## Test Plan

All previously passing text-related tests still pass: `LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_text_inference.py`

All vision tests pass: `LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v tests/client-sdk/inference/test_vision_inference.py`

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
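For context, a test in the new `test_vision_inference.py` exercises the vision model independently of the text tests, so only one model needs to be served at a time. The sketch below is a minimal illustration, not code from this PR: it assumes the `llama-stack-client` Python SDK (`LlamaStackClient`, `inference.chat_completion`), and the model id, image URL, and image-content schema are placeholders.

```python
# Illustrative sketch only -- not the test code from this PR. Assumes the
# llama-stack-client Python SDK and a served vision model; the model id,
# image URL, and image-content schema below are placeholders/assumptions.
import os

import pytest
from llama_stack_client import LlamaStackClient

VISION_MODEL_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # placeholder


@pytest.fixture
def client():
    # Same base-URL convention as the pytest commands in the Test Plan.
    return LlamaStackClient(base_url=os.environ["LLAMA_STACK_BASE_URL"])


def test_vision_chat_completion(client):
    response = client.inference.chat_completion(
        model_id=VISION_MODEL_ID,
        messages=[
            {
                "role": "user",
                # Assumed content schema: a list mixing image and text items.
                "content": [
                    {
                        "type": "image",
                        "image": {"url": {"uri": "https://example.com/cat.jpg"}},
                    },
                    {"type": "text", "text": "Describe what is in this image."},
                ],
            }
        ],
        stream=False,
    )
    assert response.completion_message.content
```

Because the text and vision tests now live in separate files, each can be run against a single served model, sidestepping the vLLM single-port limitation described above.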