llama-stack-mirror/.github
Derek Higgins 0a6d122623 ci: integrate vLLM inference tests with GitHub Actions workflows
Add vLLM provider support to the integration test CI workflows alongside
the existing Ollama support. Configure provider-specific test execution
so that vLLM runs only inference-specific tests (excluding vision tests)
while Ollama continues to run the full test suite.
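
One common way to express this kind of per-provider split is a provider-keyed allowlist of test suites that each CI job consults. A minimal sketch follows; the dictionary, suite names, and function are hypothetical illustrations, not taken from the actual workflow files.

```python
# Hypothetical sketch of the per-provider suite selection described above.
# Suite and provider names are assumptions for illustration only.
PROVIDER_SUITES = {
    "ollama": ["inference", "vision", "agents"],  # full test suite
    "vllm": ["inference"],                        # inference only, no vision
}

def suites_for(provider: str) -> list[str]:
    """Return the test suites a CI job should run for the given provider."""
    return PROVIDER_SUITES.get(provider, [])
```

Keeping the selection in one mapping makes it easy to expand the vLLM footprint later by adding suites to its entry.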

This enables comprehensive CI testing of both inference providers while
keeping the vLLM footprint small; this can be expanded later if it proves
not to be too disruptive.

Also updated test skips that were marked with "inline::vllm"; these
should be "remote::vllm". This causes some failing logprobs tests
to be skipped and should be revisited.
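
A provider-scoped skip like the one corrected above is typically built on pytest's skipif marker, keyed on the provider id string the stack reports at runtime ("remote::vllm", not "inline::vllm"). The helper, variable, and test names below are hypothetical sketches under that assumption, not the repository's actual fixtures.

```python
import pytest

# Hypothetical: in the real tests this would come from the test configuration.
ACTIVE_PROVIDER = "remote::vllm"

def skip_for_provider(provider_id: str, reason: str):
    """Skip a test when the active inference provider matches provider_id."""
    return pytest.mark.skipif(ACTIVE_PROVIDER == provider_id, reason=reason)

@skip_for_provider("remote::vllm", "logprobs tests failing; revisit")
def test_logprobs():
    ...
```

Matching on the exact provider id is what makes the "inline::" vs "remote::" distinction matter: a skip keyed on the wrong prefix never fires.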

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-11-05 20:17:18 +00:00
actions ci: integrate vLLM inference tests with GitHub Actions workflows 2025-11-05 20:17:18 +00:00
ISSUE_TEMPLATE docs: fix broken links (#3647) 2025-10-01 16:48:13 -07:00
workflows ci: integrate vLLM inference tests with GitHub Actions workflows 2025-11-05 20:17:18 +00:00
CODEOWNERS chore: update CODEOWNERS (#3613) 2025-10-03 17:12:34 -07:00
dependabot.yml chore: move src/llama_stack/ui to src/llama_stack_ui (#4068) 2025-11-04 15:21:49 -08:00
mergify.yml ci: introduce Mergify bot to notify on PR conflicts (#4043) 2025-11-03 12:21:19 -08:00
PULL_REQUEST_TEMPLATE.md chore: fix visible comments in pr template (#2279) 2025-05-27 15:42:33 +02:00
TRIAGERS.md chore: update CODEOWNERS (#3613) 2025-10-03 17:12:34 -07:00