llama-stack-mirror/.github/actions
Omar Abdelwahab 761a2a0ce3 fix(ci): Use 'uv run' to execute llama command in virtual environment
The previous commit tried to run 'llama stack list-deps' directly, but the 'llama' command
was not yet on PATH because the virtual environment had not been activated.

This fix uses 'uv run llama' instead, which executes the command inside the uv-managed
virtual environment, ensuring the llama CLI is accessible.
2025-11-12 15:51:55 -08:00
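For illustration, a minimal sketch of how the changed step might look inside a composite action such as setup-runner/action.yml. The step name, file layout, and everything beyond the `uv run llama stack list-deps` invocation are assumptions, not the actual file contents:

```yaml
# Hypothetical sketch of a composite-action step using 'uv run'; the step name
# and surrounding structure are assumptions based on the commit message above.
runs:
  using: "composite"
  steps:
    - name: List llama-stack dependencies
      shell: bash
      run: |
        # 'llama' installed by uv is not on PATH because the venv is never
        # activated; 'uv run' resolves the CLI inside the uv-managed venv.
        uv run llama stack list-deps
```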
install-llama-stack-client fix(ci): use test.pypi as extra index for RC dependencies (#4009) 2025-10-31 12:55:43 -07:00
run-and-record-tests ci: Add vLLM support to integration testing infrastructure (with qwen) (#3545) 2025-11-06 10:36:40 +01:00
setup-ollama feat(tests): migrate to global "setups" system for test configuration (#3390) 2025-09-09 15:50:56 -07:00
setup-runner fix(ci): Use 'uv run' to execute llama command in virtual environment 2025-11-12 15:51:55 -08:00
setup-test-environment fix: harden storage semantics (#4118) 2025-11-12 10:35:39 -08:00
setup-vllm ci: Add vLLM support to integration testing infrastructure (with qwen) (#3545) 2025-11-06 10:36:40 +01:00