Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-05 12:21:52 +00:00
Address review comments for global vector store configuration
- Remove incorrect 'Llama-Stack v2' version reference from documentation
- Move MissingEmbeddingModelError to llama_stack/apis/common/errors.py
- Update docstring references to point to the correct exception location
- Clarify default_embedding_dimension behavior (defaults to 384)
- Update test imports and exception handling
This commit is contained in: parent f9afad99f8, commit a368f4af40
4 changed files with 5 additions and 6 deletions
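Since this commit moves MissingEmbeddingModelError into llama_stack/apis/common/errors.py, downstream code now catches it from that location. A minimal sketch, assuming only the import path stated in the commit message; the router object and its embed() call are illustrative stand-ins, not the actual Llama Stack API:

```python
# Sketch only: the import path follows this commit; `router.embed(...)` is a
# hypothetical stand-in for whatever routed call needs an embedding model.
from llama_stack.apis.common.errors import MissingEmbeddingModelError


def embed_with_default(router, texts: list[str]) -> list[list[float]]:
    try:
        # Assumed to fall back to VectorStoreConfig.default_embedding_model
        # when the caller does not name an embedding model explicitly.
        return router.embed(texts)
    except MissingEmbeddingModelError:
        # Raised when no default is configured and the caller supplied none.
        raise
```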
```diff
@@ -29,7 +29,7 @@ class VectorStoreConfig(BaseModel):
     default_embedding_model
         The model *id* the stack should use when an embedding model is
         required but not supplied by the API caller. When *None* the
-        router will raise a :class:`~llama_stack.errors.MissingEmbeddingModelError`.
+        router will raise a :class:`~llama_stack.apis.common.errors.MissingEmbeddingModelError`.
     default_embedding_dimension
         Optional integer hint for vector dimension. Routers/providers
         may validate that the chosen model emits vectors of this size.
```
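For context, a hedged sketch of how the two documented fields might be populated; the field names and the 384 default come from the hunk and the commit message, while VectorStoreConfig's module path and the example model id are assumptions:

```python
# Assumptions are labeled: the import path below is a guess at where
# VectorStoreConfig lives, and the model id is purely illustrative. The field
# names and the 384 default come from the docstring above and the commit message.
from llama_stack.core.datatypes import VectorStoreConfig  # assumed path

configured = VectorStoreConfig(
    default_embedding_model="all-MiniLM-L6-v2",  # illustrative embedding model id
    default_embedding_dimension=384,             # explicit, matches the stated default
)

# Leaving default_embedding_model unset means a router that needs an embedding
# model but receives none from the API caller raises MissingEmbeddingModelError.
unconfigured = VectorStoreConfig()
```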