## Summary

Cherry-picks 5 critical fixes from main to the `release-0.3.x` branch for the v0.3.1 release, plus CI workflow updates.

**Note**: This recreates the cherry-picks from the closed PR #3991, now targeting the renamed `release-0.3.x` branch (previously `release-0.3.x-maint`).

## Commits

1. **2c56a8560** - fix(context): prevent provider data leak between streaming requests (#3924)
   - **CRITICAL SECURITY FIX**: prevents provider credentials from leaking between requests (the general pattern is sketched below)
   - Fixed import path for 0.3.0 compatibility
2. **ddd32b187** - fix(inference): enable routing of models with provider_data alone (#3928)
   - Enables routing for fully qualified model IDs with provider_data
   - Resolved merge conflicts, adapted for the 0.3.0 structure
3. **f7c2973aa** - fix: Avoid BadRequestError due to invalid max_tokens (#3667)
   - Fixes failures with Gemini and other providers that reject max_tokens=0 (see the sketch below)
   - Non-breaking API change
4. **d7f9da616** - fix(responses): sync conversation before yielding terminal events in streaming (#3888)
   - Ensures conversation sync executes even when streaming consumers break early
5. **0ffa8658b** - fix(logging): ensure logs go to stderr, loggers obey levels (#3885)
   - Fixes logging infrastructure
6. **75b49cb3c** - ci: support release branches and match client branch (#3990)
   - Updates CI workflows to support `release-X.Y.x` branches
   - Matches the client branch from llama-stack-client-python for release testing
   - Fixes artifact name collisions

## Adaptations for 0.3.0

- Fixed import paths: `llama_stack.core.telemetry.tracing` → `llama_stack.providers.utils.telemetry.tracing`
- Fixed import paths: `llama_stack.core.telemetry.telemetry` → `llama_stack.apis.telemetry`
- Changed `self.telemetry_enabled` → `self.telemetry` (0.3.0 attribute name)
- Removed the `rerank()` method, which doesn't exist in 0.3.0

## Testing

All imports have been verified; tests should pass once CI is set up.
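For readers unfamiliar with the class of bug behind #3924: when per-request provider data lives in shared state, a streaming response that outlives its request can expose one caller's credentials to another. The sketch below shows the general `contextvars` pattern for request-scoped provider data; the names and structure are illustrative assumptions, not the actual patch.

```python
import contextvars
from collections.abc import AsyncIterator
from typing import Any

# Illustrative only: request-scoped provider data held in a ContextVar so
# one request's credentials can never bleed into another request's context.
_provider_data: contextvars.ContextVar[dict[str, Any] | None] = contextvars.ContextVar(
    "provider_data", default=None
)

async def handle_streaming_request(
    headers: dict[str, str], stream: AsyncIterator[str]
) -> AsyncIterator[str]:
    # Bind this request's provider data for the duration of the request only.
    token = _provider_data.set({"api_key": headers.get("x-provider-api-key")})
    try:
        async for chunk in stream:
            yield chunk
    finally:
        # Reset even if the consumer stops iterating early; without this,
        # a reused execution context could observe stale credentials.
        _provider_data.reset(token)
```

Similarly, the max_tokens fix (#3667) amounts to not forwarding a zero or negative token limit to the provider. A minimal guard might look like this (the function name and call site are hypothetical):

```python
def sanitize_max_tokens(max_tokens: int | None) -> int | None:
    """Treat 0 and negative values as 'no limit' instead of forwarding them."""
    # Providers such as Gemini reject max_tokens=0 with a BadRequestError,
    # so an invalid limit is dropped rather than passed through.
    if max_tokens is None or max_tokens <= 0:
        return None
    return max_tokens

# Hypothetical call site: omit the field entirely when it is unset.
params = {"model": "example-model", "max_tokens": sanitize_max_tokens(0)}
params = {k: v for k, v in params.items() if v is not None}
assert "max_tokens" not in params
```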
77 lines · 2.9 KiB · YAML
name: 'Setup Test Environment'
description: 'Common setup steps for integration tests including dependencies, providers, and build'

inputs:
  python-version:
    description: 'Python version to use'
    required: true
  client-version:
    description: 'Client version (latest or published)'
    required: true
  setup:
    description: 'Setup to configure (ollama, vllm, gpt, etc.)'
    required: false
    default: 'ollama'
  suite:
    description: 'Test suite to use: base, responses, vision, etc.'
    required: false
    default: ''
  inference-mode:
    description: 'Inference mode (record or replay)'
    required: true

runs:
  using: 'composite'
  steps:
    - name: Install dependencies
      uses: ./.github/actions/setup-runner
      with:
        python-version: ${{ inputs.python-version }}
        client-version: ${{ inputs.client-version }}

    - name: Setup ollama
      if: ${{ (inputs.setup == 'ollama' || inputs.setup == 'ollama-vision') && inputs.inference-mode == 'record' }}
      uses: ./.github/actions/setup-ollama
      with:
        suite: ${{ inputs.suite }}

    - name: Setup vllm
      if: ${{ inputs.setup == 'vllm' && inputs.inference-mode == 'record' }}
      uses: ./.github/actions/setup-vllm

    - name: Build Llama Stack
      shell: bash
      run: |
        # Install llama-stack-client-python based on the client-version input
        if [ "${{ inputs.client-version }}" = "latest" ]; then
          # Check if PR is targeting a release branch
          TARGET_BRANCH="${{ github.base_ref }}"

          if [[ "$TARGET_BRANCH" =~ ^release-[0-9]+\.[0-9]+\.x$ ]]; then
            echo "PR targets release branch: $TARGET_BRANCH"
            echo "Checking if matching branch exists in llama-stack-client-python..."

            # Check if the branch exists in the client repo
            if git ls-remote --exit-code --heads https://github.com/llamastack/llama-stack-client-python.git "$TARGET_BRANCH" > /dev/null 2>&1; then
              echo "Installing llama-stack-client-python from matching branch: $TARGET_BRANCH"
              uv pip install --force-reinstall git+https://github.com/llamastack/llama-stack-client-python.git@$TARGET_BRANCH
            else
              echo "::error::Branch $TARGET_BRANCH not found in llama-stack-client-python repository"
              echo "::error::Please create the matching release branch in llama-stack-client-python before testing"
              exit 1
            fi
          fi
          # For main branch, client is already installed by setup-runner
        fi
        # For published version, client is already installed by setup-runner

        echo "Building Llama Stack"

        LLAMA_STACK_DIR=. \
        uv run --no-sync llama stack list-deps ci-tests | xargs -L1 uv pip install

    - name: Configure git for commits
      shell: bash
      run: |
        git config --local user.email "github-actions[bot]@users.noreply.github.com"
        git config --local user.name "github-actions[bot]"
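To make the branch filter in the `Build Llama Stack` step concrete, the same pattern can be checked from Python. Only maintenance branches of the form `release-<major>.<minor>.x` trigger the matching-client-branch install; concrete tag-like names such as `release-0.3.1` do not. This is just an illustration of the bash regex above, not code from the repository.

```python
import re

# Same pattern as the bash test in the action: release-<major>.<minor>.x
RELEASE_BRANCH = re.compile(r"^release-[0-9]+\.[0-9]+\.x$")

for branch in ("release-0.3.x", "release-1.12.x", "release-0.3.1", "main"):
    print(f"{branch}: {'matches' if RELEASE_BRANCH.match(branch) else 'no match'}")
# release-0.3.x: matches
# release-1.12.x: matches
# release-0.3.1: no match
# main: no match
```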