# What does this PR do?

Pick a free port for the server under test instead of always using the default. This prevents interference from already running servers and allows multiple concurrent integration test runs. Unleash the AIs!

## Test Plan

Start a Llama Stack server at port 8321, then observe that the test run uses port 8322:

```
❯ uv run --no-sync ./scripts/integration-tests.sh --stack-config server:ci-tests --inference-mode replay --setup ollama --suite base --pattern '(telemetry or safety)'
=== Llama Stack Integration Test Runner ===
Stack Config: server:ci-tests
Setup: ollama
Inference Mode: replay
Test Suite: base
Test Subdirs:
Test Pattern: (telemetry or safety)

Checking llama packages
llama-stack           0.4.0.dev0   /Users/erichuang/projects/new_test_server
llama-stack-client    0.3.0
ollama                0.6.0

=== Applying Setup Environment Variables ===
Setting SQLITE_STORE_DIR: /var/folders/cz/vyh7y1d11xg881lsxsshnc5c0000gn/T/tmp.bKLsaVAxyU
Setting stack config type: server
Setting up environment variables:
  export OLLAMA_URL='http://0.0.0.0:11434'
  export SAFETY_MODEL='ollama/llama-guard3:1b'
Will use port: 8322

=== Starting Llama Stack Server ===
Waiting for Llama Stack Server to start on port 8322...
✅ Llama Stack Server started successfully
```
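The PR body doesn't show the port-selection logic itself. As a minimal sketch of the idea, a test harness can probe for the first bindable TCP port at or above the default and hand that to the server. The `find_free_port` helper below is hypothetical, not the actual code in `scripts/integration-tests.sh`:

```python
import socket


def find_free_port(start: int = 8321, max_tries: int = 100) -> int:
    """Return the first TCP port >= start that can be bound on 0.0.0.0."""
    for port in range(start, start + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("0.0.0.0", port))
            except OSError:
                continue  # port already in use; try the next one
            return port  # bind succeeded, so the port was free when probed
    raise RuntimeError(f"no free port in range [{start}, {start + max_tries})")
```

With a server already listening on 8321, `find_free_port(8321)` would return 8322, matching the log above.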