Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-04 04:04:14 +00:00)
Add vLLM provider support to the integration-test CI workflows alongside the existing Ollama support. Configure provider-specific test execution: vLLM runs only inference-specific tests (excluding vision tests), while Ollama continues to run the full test suite. This brings both inference providers under CI while keeping the vLLM footprint small; the vLLM coverage can be expanded later if it proves not to be too disruptive.

Signed-off-by: Derek Higgins <derekh@redhat.com>
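A minimal sketch of how such a provider matrix might look in a GitHub Actions workflow. The job name, test paths, and pytest filter below are assumptions for illustration, not the actual llama-stack workflow:

```yaml
# Hypothetical excerpt of an integration-test workflow under .github/workflows/.
# Paths and the "not vision" filter are assumptions, not the real configuration.
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        provider: [ollama, vllm]
    steps:
      - uses: actions/checkout@v4
      - name: Run test suite
        run: |
          if [ "${{ matrix.provider }}" = "vllm" ]; then
            # vLLM: inference tests only, vision tests excluded
            pytest tests/integration/inference -k "not vision"
          else
            # Ollama: the full integration test suite
            pytest tests/integration
          fi
```

Keying the test selection off the matrix value keeps the two providers in one job definition, so widening the vLLM coverage later only means relaxing the filter.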
Directory listing (.github/):

- actions/
- ISSUE_TEMPLATE/
- workflows/
- CODEOWNERS
- dependabot.yml
- PULL_REQUEST_TEMPLATE.md
- TRIAGERS.md