Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
chore: rename run.yaml to config.yaml
Since we only have one config, let's call it config.yaml! It should be treated as the source of truth for starting a stack. Change all file names, tests, etc. accordingly.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
This commit is contained in:
  parent 4a3f9151e3
  commit 0cd98c957e

64 changed files with 147 additions and 145 deletions
@@ -85,7 +85,7 @@ Published on: 2025-07-28T23:35:23Z
 ## Highlights

 * Automatic model registration for self-hosted providers (ollama and vllm currently). No need for `INFERENCE_MODEL` environment variables which need to be updated, etc.
-* Much simplified starter distribution. Most `ENABLE_` env variables are now gone. When you set `VLLM_URL`, the `vllm` provider is auto-enabled. Similar for `MILVUS_URL`, `PGVECTOR_DB`, etc. Check the [run.yaml](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/templates/starter/run.yaml) for more details.
+* Much simplified starter distribution. Most `ENABLE_` env variables are now gone. When you set `VLLM_URL`, the `vllm` provider is auto-enabled. Similar for `MILVUS_URL`, `PGVECTOR_DB`, etc. Check the [config.yaml](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/templates/starter/config.yaml) for more details.
 * All tests migrated to pytest now (thanks @Elbehery)
 * DPO implementation in the post-training provider (thanks @Nehanth)
 * (Huge!) Support for external APIs and providers thereof (thanks @leseb, @cdoern and others). This is a really big deal -- you can now add more APIs completely out of tree and experiment with them before (optionally) wanting to contribute back.
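As a rough illustration of the highlights above (not part of this commit), here is a minimal Python sketch using the `llama_stack_client` package. It assumes a Llama Stack server has already been started from the renamed config.yaml (e.g. via `llama stack run config.yaml`) and is reachable on its default port; the base URL and port are assumptions, not taken from the diff.

```python
# Minimal sketch, not part of this commit: assumes a Llama Stack server was
# started from the renamed config.yaml (e.g. `llama stack run config.yaml`)
# and that the llama_stack_client package is installed. The base URL/port
# below are assumptions; adjust them to your deployment.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# With automatic model registration (first highlight above), models served by
# an auto-enabled provider such as vllm should appear here without setting
# INFERENCE_MODEL or registering them by hand.
for model in client.models.list():
    print(model.identifier)
```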