llama-stack-mirror/llama_stack/providers/utils/inference
Latest commit cb4b677552 by Matthew Farrellee: fix: allowed_models config did not filter models (#4030)
# What does this PR do?

Closes #4022.

## Test Plan

CI with new tests.

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
(cherry picked from commit 1263448de2)
Committed 2025-11-24 18:12:44 +00:00
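
Since the PR body above is terse, here is a minimal sketch of what an `allowed_models` filter of this kind typically does: an unset list means no restriction, while a configured list must actually be applied when models are listed (the behavior #4030 says was broken). The `filter_models` helper and its parameter names are illustrative assumptions, not the repository's actual implementation; see `model_registry.py` and `openai_mixin.py` for the real fix.

```python
# Hypothetical sketch of allowed_models filtering; names are assumptions
# for illustration, not llama-stack's actual API.

def filter_models(models: list[str], allowed_models: list[str] | None) -> list[str]:
    """Return only the models permitted by the allowed_models config.

    None means "no restriction"; an empty list means "allow nothing".
    The bug described in #4030 is the case where a configured list
    was ignored instead of being applied here.
    """
    if allowed_models is None:
        return models
    allowed = set(allowed_models)
    return [m for m in models if m in allowed]


if __name__ == "__main__":
    available = ["llama-3.1-8b", "llama-3.1-70b", "gpt-4o-mini"]
    print(filter_models(available, None))               # no restriction: all pass
    print(filter_models(available, ["llama-3.1-8b"]))   # only the allowed model
```
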
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | fix(inference): enable routing of models with provider_data alone (backport #3928) (#4142) | 2025-11-12 13:41:27 -08:00 |
| `inference_store.py` | fix: harden storage semantics (backport #4118) (#4138) | 2025-11-12 13:01:21 -08:00 |
| `litellm_openai_mixin.py` | feat(api)!: support extra_body to embeddings and vector_stores APIs (#3794) | 2025-10-12 19:01:52 -07:00 |
| `model_registry.py` | fix: allowed_models config did not filter models (#4030) | 2025-11-24 18:12:44 +00:00 |
| `openai_compat.py` | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| `openai_mixin.py` | fix: allowed_models config did not filter models (#4030) | 2025-11-24 18:12:44 +00:00 |
| `prompt_adapter.py` | chore!: Safety api refactoring to use OpenAIMessageParam (#3796) | 2025-10-12 08:01:00 -07:00 |