llama-stack-mirror/tests/unit/providers
Francisco Arceo 48581bf651
chore: Updating how default embedding model is set in stack (#3818)
# What does this PR do?

Refactors how the default vector store provider and embedding model are set:
they now come from an optional `vector_stores` section in the
`StackRunConfig`, with the surrounding code cleaned up accordingly (some
pieces of VectorDB had to be added back). Also adds remote Qdrant and
Weaviate to the starter distro (following another PR where their inference
providers were added for UX).

The new config is simply (the default for the Starter distro):

```yaml
vector_stores:
  default_provider_id: faiss
  default_embedding_model:
    provider_id: sentence-transformers
    model_id: nomic-ai/nomic-embed-text-v1.5
```
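
As a rough illustration of how such an optional section could be modeled and parsed, here is a minimal sketch using plain dataclasses. The class and function names (`VectorStoresConfig`, `EmbeddingModelRef`, `parse_vector_stores`) are assumptions for illustration, not the actual llama-stack types:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical models mirroring the `vector_stores` YAML section above;
# names are illustrative, not the real llama-stack classes.
@dataclass
class EmbeddingModelRef:
    provider_id: str
    model_id: str

@dataclass
class VectorStoresConfig:
    default_provider_id: Optional[str] = None
    default_embedding_model: Optional[EmbeddingModelRef] = None

def parse_vector_stores(raw: Optional[dict]) -> Optional[VectorStoresConfig]:
    """Build the optional config section from an already-parsed YAML mapping."""
    if raw is None:
        # Section is optional: absent means no stack-level defaults.
        return None
    model = raw.get("default_embedding_model")
    return VectorStoresConfig(
        default_provider_id=raw.get("default_provider_id"),
        default_embedding_model=EmbeddingModelRef(**model) if model else None,
    )

cfg = parse_vector_stores({
    "default_provider_id": "faiss",
    "default_embedding_model": {
        "provider_id": "sentence-transformers",
        "model_id": "nomic-ai/nomic-embed-text-v1.5",
    },
})
print(cfg.default_embedding_model.model_id)  # nomic-ai/nomic-embed-text-v1.5
```

Keeping the whole section optional means existing run configs without `vector_stores` keep working unchanged.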

## Test Plan
CI and Unit tests.

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-10-20 14:22:45 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| agent | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| agents | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| batches | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| files | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| inference | fix(tests): reduce some test noise (#3825) | 2025-10-16 09:52:16 -07:00 |
| inline | feat: Add responses and safety impl extra_body (#3781) | 2025-10-15 15:01:37 -07:00 |
| nvidia | fix(tests): reduce some test noise (#3825) | 2025-10-16 09:52:16 -07:00 |
| utils | fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826) | 2025-10-16 10:12:13 -07:00 |
| vector_io | chore: Updating how default embedding model is set in stack (#3818) | 2025-10-20 14:22:45 -07:00 |
| test_bedrock.py | fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) | 2025-09-11 11:41:53 +02:00 |
| test_configs.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |