llama-stack/llama_stack/templates
Ashwin Bharambe 314ee09ae3
chore: move all Llama Stack types from llama-models to llama-stack (#1098)
llama-models should have extremely minimal cruft. Its sole purpose
should be didactic -- show the simplest implementation of the llama
models and document the prompt formats, etc.

This PR is the complement to
https://github.com/meta-llama/llama-models/pull/279

## Test Plan

Ensure all `llama` CLI `model` sub-commands work:

```bash
llama model list
llama model download --model-id ...
llama model prompt-format -m ...
```

Ran tests:
```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/
LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/
LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/
```

Created a fresh venv (`uv venv && source .venv/bin/activate`), ran
`llama stack build --template fireworks --image-type venv` followed by
`llama stack run together --image-type venv`, and confirmed the server runs.
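The venv-based build-and-run step described above, collected as a fenced block matching the other test-plan steps (these commands require the `llama` CLI to be installed and are taken verbatim from the description):

```bash
# Create and activate a fresh virtual environment with uv
uv venv && source .venv/bin/activate

# Build the fireworks template as a venv image
llama stack build --template fireworks --image-type venv

# Start the server (the run target here is "together", as in the description)
llama stack run together --image-type venv
```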

Also checked that the OpenAPI generator runs successfully and produces no
changes in the generated files.

```bash
cd docs/openapi_generator
sh run_openapi_generator.sh
```
2025-02-14 09:10:59 -08:00
| Name | Latest commit | Date |
| --- | --- | --- |
| bedrock | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| cerebras | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| dell | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| experimental-post-training | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| fireworks | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| hf-endpoint | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| hf-serverless | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| meta-reference-gpu | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| meta-reference-quantized-gpu | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| nvidia | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| ollama | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| remote-vllm | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| sambanova | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| tgi | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| together | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| vllm-gpu | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |