# llama-stack-mirror/llama_stack/providers/utils/inference

**Latest commit:** f70aa99c97 by Ashwin Bharambe (2025-10-16 06:47:39 -07:00)

**fix(models)!: always prefix models with provider_id when registering (#3822)**
**!!BREAKING CHANGE!!**

Registration now always stores a model under an identifier prefixed with its provider_id. Lookup is equally straightforward: we always look for this prefixed identifier and never try to find a match for a name without the provider_id prefix.
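
A minimal sketch of the resulting behavior, assuming a `provider_id/model_id` identifier format with a `/` separator (the class and method names below are hypothetical illustrations, not the actual registry code in `model_registry.py`):

```python
class ModelRegistrySketch:
    """Hypothetical stand-in for the real model registry."""

    def __init__(self) -> None:
        self._models: dict[str, dict] = {}

    def register(self, provider_id: str, provider_model_id: str) -> str:
        # Registration always stores the model under the prefixed
        # identifier, never under the bare provider_model_id.
        identifier = f"{provider_id}/{provider_model_id}"
        self._models[identifier] = {
            "provider_id": provider_id,
            "provider_model_id": provider_model_id,
        }
        return identifier

    def lookup(self, identifier: str) -> dict:
        # Lookup is exact: an un-prefixed name simply misses, rather
        # than being fuzzily matched against registered models.
        return self._models[identifier]


registry = ModelRegistrySketch()
registry.register("ollama", "llama3.2:3b")
registry.lookup("ollama/llama3.2:3b")   # found
# registry.lookup("llama3.2:3b")        # would raise KeyError: no fallback
```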

Note that this ideally means the `register_model()` API should also be updated (we should kill "identifier" from there), but I am not doing that as part of this PR.

## Test Plan

Existing unit tests
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | refactor: replace default all-MiniLM-L6-v2 embedding model by nomic-embed-text-v1.5 in Llama Stack (#3183) | 2025-10-14 10:44:20 -04:00 |
| `inference_store.py` | chore: fix/add logging categories (#3658) | 2025-10-02 13:10:13 -07:00 |
| `litellm_openai_mixin.py` | feat(api)!: support extra_body to embeddings and vector_stores APIs (#3794) | 2025-10-12 19:01:52 -07:00 |
| `model_registry.py` | feat: use SecretStr for inference provider auth credentials (#3724) | 2025-10-10 07:32:50 -07:00 |
| `openai_compat.py` | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| `openai_mixin.py` | fix(models)!: always prefix models with provider_id when registering (#3822) | 2025-10-16 06:47:39 -07:00 |
| `prompt_adapter.py` | chore!: Safety api refactoring to use OpenAIMessageParam (#3796) | 2025-10-12 08:01:00 -07:00 |