llama-stack-mirror/llama_stack/core
Francisco Arceo ef4bc70bbe
feat: Enable setting a default embedding model in the stack (#3803)
# What does this PR do?

Enables automatic embedding model detection for vector stores by using a
`default_configured` boolean that can be defined in `run.yaml`.
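
A minimal sketch of what this could look like in `run.yaml`, assuming the flag sits alongside the embedding model's entry in the `models` list (the surrounding fields and exact placement are illustrative, not taken verbatim from this PR):

```yaml
models:
  - model_id: sentence-transformers/all-MiniLM-L6-v2
    provider_id: sentence-transformers
    model_type: embedding
    metadata:
      embedding_dimension: 384
      # Assumed placement: marks this embedding model as the stack-wide
      # default used when a vector store is created without one
      default_configured: true
```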

## Test Plan
- Unit tests
- Integration tests
- Simple example below:

Spin up the stack:
```bash
uv run llama stack build --distro starter --image-type venv --run
```
Then test with OpenAI's client:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
# No embedding model or dimension needed; the stack's default is used
vs = client.vector_stores.create()
```
Previously, you needed to pass the embedding model explicitly:

```python
vs = client.vector_stores.create(
    extra_body={
        "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
        "embedding_dimension": 384,
    }
)
```

The `extra_body` is now unnecessary.

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-10-14 18:25:13 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| `access_control` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `conversations` | feat: Add support for Conversations in Responses API (#3743) | 2025-10-10 11:57:40 -07:00 |
| `prompts` | feat: Adding OpenAI Prompts API (#3319) | 2025-09-08 11:05:13 -04:00 |
| `routers` | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| `routing_tables` | chore!: BREAKING CHANGE removing VectorDB APIs (#3774) | 2025-10-11 14:07:08 -07:00 |
| `server` | fix: replace python-jose with PyJWT for JWT handling (#3756) | 2025-10-14 09:35:48 +02:00 |
| `store` | feat: make object registration idempotent (#3752) | 2025-10-09 17:04:28 -07:00 |
| `ui` | chore!: BREAKING CHANGE removing VectorDB APIs (#3774) | 2025-10-11 14:07:08 -07:00 |
| `utils` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `__init__.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `build.py` | feat(distro): no huggingface provider for starter (#3258) | 2025-08-26 14:06:36 -07:00 |
| `build_container.sh` | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| `build_venv.sh` | fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193) | 2025-08-18 17:02:24 -07:00 |
| `client.py` | feat: introduce API leveling, post_training, eval to v1alpha (#3449) | 2025-09-26 16:18:07 +02:00 |
| `common.sh` | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| `configure.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `datatypes.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `distribution.py` | chore!: BREAKING CHANGE removing VectorDB APIs (#3774) | 2025-10-11 14:07:08 -07:00 |
| `external.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `id_generation.py` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |
| `inspect.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `library_client.py` | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| `providers.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `request_headers.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `resolver.py` | chore!: BREAKING CHANGE removing VectorDB APIs (#3774) | 2025-10-11 14:07:08 -07:00 |
| `stack.py` | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| `start_stack.sh` | chore!: remove --env from llama stack run (#3711) | 2025-10-07 20:58:15 -07:00 |
| `testing_context.py` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |