Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-04 20:14:13 +00:00
ci: integrate vLLM inference tests with GitHub Actions workflows
Add vLLM provider support to the integration test CI workflows alongside the existing Ollama support. Configure provider-specific test execution: vLLM runs only the inference-specific tests (excluding vision tests), while Ollama continues to run the full test suite. This enables CI coverage of both inference providers while keeping the vLLM footprint small; it can be expanded later if it proves not to be too disruptive.

Signed-off-by: Derek Higgins <derekh@redhat.com>
This commit is contained in:
  parent 666d6a6fc0
  commit 746e9c91a9
3 changed files with 27 additions and 7 deletions
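
In rough terms, the provider split described in the commit message amounts to pairing each CI provider with the suite it runs. The mapping below is an illustrative assumption, not taken from the actual workflow files; only "base-vllm-subset" appears in the diff:

# Hypothetical provider-to-suite mapping. "base-vllm-subset" comes from the
# diff below; the "base" suite name is an assumption based on its use of
# base_roots and default_setup="ollama".
PROVIDER_SUITES: dict[str, str] = {
    "ollama": "base",             # Ollama keeps running the full test suite
    "vllm": "base-vllm-subset",   # vLLM runs only the inference tests
}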
@@ -147,6 +147,11 @@ SUITE_DEFINITIONS: dict[str, Suite] = {
         roots=base_roots,
         default_setup="ollama",
     ),
+    "base-vllm-subset": Suite(
+        name="base-vllm-subset",
+        roots=["tests/integration/inference"],
+        default_setup="vllm",
+    ),
     "responses": Suite(
         name="responses",
         roots=["tests/integration/responses"],
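
For context, here is a minimal sketch of the Suite structure the hunk above extends. Only the name, roots, and default_setup fields are visible in the diff; any other detail is an assumption:

from dataclasses import dataclass

# Sketch of the Suite record assumed by SUITE_DEFINITIONS; field names match
# the diff, the dataclass shape itself is an assumption.
@dataclass
class Suite:
    name: str
    roots: list[str]    # pytest collection roots for this suite
    default_setup: str  # provider setup to use, e.g. "ollama" or "vllm"

# The new entry from the diff, restated standalone: vLLM only collects the
# inference tests, which is what keeps its CI footprint small.
SUITE_DEFINITIONS: dict[str, Suite] = {
    "base-vllm-subset": Suite(
        name="base-vllm-subset",
        roots=["tests/integration/inference"],
        default_setup="vllm",
    ),
}

Under this reading, running a suite reduces to invoking pytest over suite.roots with the provider named by default_setup, so limiting roots to tests/integration/inference is what excludes everything outside the inference tests from the vLLM job.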