Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-05 20:27:35 +00:00)
Replace MissingEmbeddingModelError with IBM Granite default
- Replace error with ibm-granite/granite-embedding-125m-english default
- Based on issue #2418 for commercial compatibility and better UX
- Update tests to verify default fallback behavior
- Update documentation to reflect new precedence rules
- Remove unused MissingEmbeddingModelError class
- Update tip section to clarify fallback behavior

Resolves review comment to use default instead of error.
parent 380bd1bb7a
commit 8e2675f50c
4 changed files with 13 additions and 16 deletions
@@ -29,7 +29,7 @@ class VectorStoreConfig(BaseModel):
     default_embedding_model
         The model *id* the stack should use when an embedding model is
         required but not supplied by the API caller. When *None* the
-        router will raise a :class:`~llama_stack.apis.common.errors.MissingEmbeddingModelError`.
+        router will fall back to the system default (ibm-granite/granite-embedding-125m-english).
     default_embedding_dimension
         Optional integer hint for vector dimension. Routers/providers
         may validate that the chosen model emits vectors of this size.
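For context, here is a minimal sketch of the fallback behavior this commit describes. It is not the actual llama-stack implementation; the constant name, the resolver function, and its signature are assumptions for illustration. Only the field names and the Granite model id come from the diff above.

```python
# Sketch of the "default instead of error" resolution this commit introduces.
# Assumptions: DEFAULT_EMBEDDING_MODEL and resolve_embedding_model are
# hypothetical names, not part of the llama-stack API.
from typing import Optional

from pydantic import BaseModel

DEFAULT_EMBEDDING_MODEL = "ibm-granite/granite-embedding-125m-english"


class VectorStoreConfig(BaseModel):
    """Vector store settings (field names follow the diff above)."""

    default_embedding_model: Optional[str] = None
    default_embedding_dimension: Optional[int] = None


def resolve_embedding_model(
    config: VectorStoreConfig, requested: Optional[str] = None
) -> str:
    """Apply the precedence the commit describes:
    caller-supplied model > configured default > system default (Granite)."""
    if requested is not None:
        return requested
    if config.default_embedding_model is not None:
        return config.default_embedding_model
    return DEFAULT_EMBEDDING_MODEL


if __name__ == "__main__":
    cfg = VectorStoreConfig()
    # Nothing supplied by the caller or the config -> Granite default.
    print(resolve_embedding_model(cfg))
```

The point of the change is visible in the last branch: where the router previously raised MissingEmbeddingModelError, it now returns the Granite default, so callers that omit an embedding model still get a working configuration.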