feat: consolidate most distros into "starter"

* Removes several distros
* Folds the removed distros into the "starter" distribution
* Adds documentation for the "starter" distribution
* Partially reverts https://github.com/meta-llama/llama-stack/pull/2482,
  since inference providers are now disabled by default and can be enabled
  manually via an environment variable
* Disables safety in the starter distro
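
The opt-in mechanism described above can be sketched as follows — a minimal illustration using the `ENABLE_OLLAMA` and `OLLAMA_URL` variables that appear in the workflow diff in this commit (the exact semantics of the toggle are an assumption, not a spec):

```shell
# Sketch: opting back in to the ollama inference provider in the "starter"
# distro. Variable names are taken from the CI workflow changed in this
# commit; how the distro interprets them is assumed.
export ENABLE_OLLAMA="ollama"
export OLLAMA_URL="http://0.0.0.0:11434"

# Build the consolidated starter distribution as a venv image
uv run llama stack build --template starter --image-type venv
```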

Closes: #2502
Signed-off-by: Sébastien Han <seb@redhat.com>
Author: Sébastien Han <seb@redhat.com>
Date:   2025-06-25 16:09:41 +02:00
Parent: f1c62e0af0
Commit: 6d8e2c6212

132 changed files with 1009 additions and 10845 deletions


@@ -43,7 +43,7 @@ jobs:
       - name: Build Llama Stack
         run: |
-          uv run llama stack build --template ollama --image-type venv
+          uv run llama stack build --template starter --image-type venv
       - name: Check Storage and Memory Available Before Tests
         if: ${{ always() }}
@@ -54,16 +54,18 @@ jobs:
       - name: Run Integration Tests
         env:
           INFERENCE_MODEL: "meta-llama/Llama-3.2-3B-Instruct"
-          OLLAMA_INFERENCE_MODEL: "meta-llama/Llama-3.2-3B-Instruct" # for library tests
+          ENABLE_OLLAMA: "ollama" # for library tests
+          OLLAMA_URL: "http://0.0.0.0:11434"
         run: |
           if [ "${{ matrix.client-type }}" == "library" ]; then
-            stack_config="ollama"
+            stack_config="starter"
           else
-            stack_config="server:ollama"
+            stack_config="server:starter"
           fi
           uv run pytest -s -v tests/integration/${{ matrix.test-type }} --stack-config=${stack_config} \
            -k "not(builtin_tool or safety_with_image or code_interpreter or test_rag)" \
-            --text-model="meta-llama/Llama-3.2-3B-Instruct" \
+            --text-model="ollama/meta-llama/Llama-3.2-3B-Instruct" \
            --embedding-model=all-MiniLM-L6-v2 \
            --color=yes \
            --capture=tee-sys | tee pytest-${{ matrix.test-type }}.log