feat: consolidate most distros into "starter" (#2516)

# What does this PR do?

* Removes a bunch of distros
* The removed distros have been folded into the "starter" distribution
* Documentation for "starter" has been added
* Partially reverts https://github.com/meta-llama/llama-stack/pull/2482,
  since inference providers are now disabled by default and can be turned on
  manually via an environment variable (see the sketch below)
* Disables safety in the starter distro
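
A minimal sketch of turning a provider back on, assuming the `ENABLE_<PROVIDER>` convention used in the updated test docs below (e.g. `ENABLE_TOGETHER=together`); exact variable names for other providers may differ:

```bash
# Example: enable only the together inference provider in the starter distro
# and run the text inference tests against it (the API key is a placeholder).
export TOGETHER_API_KEY=<together_api_key>
ENABLE_TOGETHER=together pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=starter \
   --text-model=meta-llama/Llama-3.1-8B-Instruct
```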

Closes: https://github.com/meta-llama/llama-stack/issues/2502.

~~Needs: https://github.com/meta-llama/llama-stack/pull/2482 for Ollama
to work properly in the CI.~~

TODO:

- [ ] We can only update `install.sh` when we get a new release.
- [x] Update providers documentation
- [ ] Update notebooks to reference starter instead of ollama

Signed-off-by: Sébastien Han <seb@redhat.com>
132 changed files with 1009 additions and 10845 deletions


@@ -13,7 +13,7 @@ Here are the most important options:
 - **`server:<config>`** - automatically start a server with the given config (e.g., `server:fireworks`). This provides one-step testing by auto-starting the server if the port is available, or reusing an existing server if already running.
 - **`server:<config>:<port>`** - same as above but with a custom port (e.g., `server:together:8322`)
 - a URL which points to a Llama Stack distribution server
-- a template (e.g., `fireworks`, `together`) or a path to a `run.yaml` file
+- a template (e.g., `starter`) or a path to a `run.yaml` file
 - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
 - `--env`: set environment variables, e.g. --env KEY=value. this is a utility option to set environment variables required by various providers.
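
For example, the template and `server:<config>:<port>` forms might be used like this (a sketch only; the test path is illustrative and `server:starter:8322` simply mirrors the `server:together:8322` example above):

```bash
# Library client pointed at the starter template (no separate server process)
pytest -s -v tests/integration/inference/ --stack-config=starter

# Auto-start (or reuse) a server with the starter config on a custom port
pytest -s -v tests/integration/inference/ --stack-config=server:starter:8322
```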
@@ -61,28 +61,29 @@ pytest -s -v tests/integration/inference/ tests/integration/safety/ tests/integr
 ### Testing with Library Client
-Run all text inference tests with the `together` distribution:
+Run all text inference tests with the `starter` distribution using the `together` provider:
 ```bash
-pytest -s -v tests/integration/inference/test_text_inference.py \
-   --stack-config=together \
+ENABLE_TOGETHER=together pytest -s -v tests/integration/inference/test_text_inference.py \
+   --stack-config=starter \
    --text-model=meta-llama/Llama-3.1-8B-Instruct
 ```
-Run all text inference tests with the `together` distribution and `meta-llama/Llama-3.1-8B-Instruct`:
+Run all text inference tests with the `starter` distribution using the `together` provider and `meta-llama/Llama-3.1-8B-Instruct`:
 ```bash
-pytest -s -v tests/integration/inference/test_text_inference.py \
-   --stack-config=together \
+ENABLE_TOGETHER=together pytest -s -v tests/integration/inference/test_text_inference.py \
+   --stack-config=starter \
    --text-model=meta-llama/Llama-3.1-8B-Instruct
 ```
-Running all inference tests for a number of models:
+Running all inference tests for a number of models using the `together` provider:
 ```bash
 TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
 VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
 EMBEDDING_MODELS=all-MiniLM-L6-v2
+ENABLE_TOGETHER=together
 export TOGETHER_API_KEY=<together_api_key>
 pytest -s -v tests/integration/inference/ \


@@ -65,7 +65,7 @@ def pytest_addoption(parser):
         help=textwrap.dedent(
             """
             a 'pointer' to the stack. this can be either be:
-            (a) a template name like `fireworks`, or
+            (a) a template name like `starter`, or
             (b) a path to a run.yaml file, or
             (c) an adhoc config spec, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`
             """


@@ -10,6 +10,7 @@ import socket
 import subprocess
 import tempfile
 import time
+from urllib.parse import urlparse
 import pytest
 import requests
@@ -215,12 +216,17 @@ def llama_stack_client(request, provider_data):
             provider_data=provider_data,
         )
-    # check if this looks like a URL
-    if config.startswith("http") or "//" in config:
-        return LlamaStackClient(
-            base_url=config,
-            provider_data=provider_data,
-        )
+    # check if this looks like a URL using proper URL parsing
+    try:
+        parsed_url = urlparse(config)
+        if parsed_url.scheme and parsed_url.netloc:
+            return LlamaStackClient(
+                base_url=config,
+                provider_data=provider_data,
+            )
+    except Exception:
+        # If URL parsing fails, treat as non-URL config
+        pass
     if "=" in config:
         run_config = run_config_from_adhoc_config_spec(config)


@@ -45,7 +45,7 @@ def skip_if_model_doesnt_support_suffix(client_with_models, model_id):
 # To test `fim` ( fill in the middle ) completion, we need to use a model that supports suffix.
 # Use this to specifically test this API functionality.
-# pytest -sv --stack-config="inference=ollama" \
+# pytest -sv --stack-config="inference=starter" \
 # tests/integration/inference/test_openai_completion.py \
 # --text-model qwen2.5-coder:1.5b \
 # -k test_openai_completion_non_streaming_suffix