There are two obvious types of tests:
| Type | Location | Purpose |
|---|---|---|
| Unit | `tests/unit/` | Fast, isolated component testing |
| Integration | `tests/integration/` | End-to-end workflows with record-replay |
Both have their place. For unit tests, it is important to create minimal mocks and instead rely more on "fakes". Mocks are too brittle. In either case, tests must be very fast and reliable.
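As a rough illustration of what a "fake" looks like compared to a mock (a hypothetical sketch; `InMemoryKVStore` and `register_session` are illustrative names, not actual Llama Stack code):

```python
class InMemoryKVStore:
    """Fake key-value store: real behavior, no external dependencies."""

    def __init__(self):
        self._data = {}

    def set(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)


def register_session(store, session_id: str) -> bool:
    """Code under test: depends only on the store's observable behavior."""
    if store.get(session_id) is not None:
        return False
    store.set(session_id, "active")
    return True


def test_register_session_is_idempotent():
    # The fake is a drop-in replacement for the real store; unlike a mock,
    # the test does not care which methods are called in what order.
    store = InMemoryKVStore()
    assert register_session(store, "abc") is True
    assert register_session(store, "abc") is False
```

Because the fake actually behaves like the real dependency, refactoring the code under test does not break the test the way a call-by-call mock assertion would.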
## Record-replay for integration tests
Testing AI applications end-to-end creates some challenges:
- API costs accumulate quickly during development and CI
- Non-deterministic responses make tests unreliable
- Multiple providers require testing the same logic across different APIs
Our solution: Record real API responses once, replay them for fast, deterministic tests. This is better than mocking because AI APIs have complex response structures and streaming behavior. Mocks can miss edge cases that real APIs exhibit. A single test can exercise underlying APIs in multiple complex ways, making it really hard to mock.
This gives you:
- Cost control - No repeated API calls during development
- Speed - Instant test execution with cached responses
- Reliability - Consistent results regardless of external service state
- Provider coverage - Same tests work across OpenAI, Anthropic, local models, etc.
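To make the record-replay idea concrete, here is a minimal sketch of the mechanism (hypothetical code, assuming responses are keyed by a hash of the request and cached as JSON on disk; the actual Llama Stack recording machinery differs in its details):

```python
import hashlib
import json
from pathlib import Path


class RecordReplayClient:
    """Wraps a real API client; records responses once, replays them afterwards."""

    def __init__(self, real_client, cache_dir: str, mode: str = "replay"):
        self.real_client = real_client          # e.g., an OpenAI-compatible client
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.mode = mode                        # "record" or "replay"

    def _cache_path(self, endpoint: str, payload: dict) -> Path:
        # Deterministic key: hash of the endpoint plus the canonicalized request body.
        digest = hashlib.sha256(
            json.dumps({"endpoint": endpoint, "body": payload}, sort_keys=True).encode()
        ).hexdigest()
        return self.cache_dir / f"{digest}.json"

    def post(self, endpoint: str, payload: dict) -> dict:
        path = self._cache_path(endpoint, payload)
        if self.mode == "replay":
            # No network call: an identical request yields the identical recorded response.
            return json.loads(path.read_text())
        response = self.real_client.post(endpoint, payload)   # hit the real API
        path.write_text(json.dumps(response))                  # record for later replay
        return response
```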
## Testing Quick Start
You can run the unit tests with:
```bash
uv run --group unit pytest -sv tests/unit/
```
For running integration tests, you must provide a few things:
- A stack config. This is a pointer to a stack. You have a few ways to point to a stack:
  - `server:<config>` - automatically start a server with the given config (e.g., `server:starter`). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
  - `server:<config>:<port>` - same as above but with a custom port (e.g., `server:starter:8322`)
  - a URL which points to a Llama Stack distribution server
  - a distribution name (e.g., `starter`) or a path to a `run.yaml` file
  - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
- Any API keys you need to use should be set in the environment, or can be passed in with the `--env` option.
You can run the integration tests in replay mode with:
```bash
# Run all tests with existing recordings
uv run --group test \
   pytest -sv tests/integration/ --stack-config=starter
```
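The same invocation works with any of the stack pointer forms listed above, for example (illustrative commands; the custom port and the provider pairs are just the example values from that list):

```bash
# Auto-start a server on a custom port and run against it
uv run --group test \
   pytest -sv tests/integration/ --stack-config=server:starter:8322

# Run against a comma-separated list of api=provider pairs
uv run --group test \
   pytest -sv tests/integration/ --stack-config=inference=fireworks,safety=llama-guard
```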
## Re-recording tests

### Local Re-recording (Manual Setup Required)
If you want to re-record tests locally, you can do so with:
```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record \
   uv run --group test \
   pytest -sv tests/integration/ --stack-config=starter -k "<appropriate test name>"
```
This will record new API responses and overwrite the existing recordings.
You must be careful when re-recording. CI workflows assume a specific setup for running the replay-mode tests. You must re-record the tests in the same way as the CI workflows. This means:
- you need Ollama running and serving some specific models.
- you are using the `starter` distribution.
### Remote Re-recording (Recommended)
For easier re-recording without local setup, use the automated recording workflow:
```bash
# Record tests for specific test subdirectories
./scripts/github/schedule-record-workflow.sh --test-subdirs "agents,inference"

# Record with vision tests enabled
./scripts/github/schedule-record-workflow.sh --test-suite vision

# Record with specific provider
./scripts/github/schedule-record-workflow.sh --test-subdirs "agents" --test-provider vllm
```
This script:
- 🚀 Runs in GitHub Actions - no local Ollama setup required
- 🔍 Auto-detects your branch and associated PR
- 🍴 Works from forks - handles repository context automatically
- ✅ Commits recordings back to your branch
Prerequisites:
- GitHub CLI: `brew install gh && gh auth login`
- jq: `brew install jq`
- Your branch pushed to a remote
Supported providers: vllm, ollama
## Next Steps
- Integration Testing Guide - Detailed usage and configuration
- Unit Testing Guide - Fast component testing