llama-stack-mirror/llama_stack/providers/utils/inference
Matthew Farrellee e892a3f7f4
feat: add refresh_models support to inference adapters (default: false) (#3719)
# What does this PR do?

Inference adapters can now configure `refresh_models: bool` to control
periodic model listing from their providers.

BREAKING CHANGE: the Together inference adapter's default changed. Previously
it always refreshed models; now it follows the config.

Addresses the "models: refresh" item in #3517.
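
For illustration only, a minimal sketch of how the flag might be set in a provider's `config` block of a `run.yaml` (provider IDs and keys here are placeholders, not taken from this change):

```yaml
providers:
  inference:
    - provider_id: together
      provider_type: remote::together
      config:
        api_key: ${env.TOGETHER_API_KEY}
        # opt back in to periodic model listing (default is now false)
        refresh_models: true
```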

## Test Plan

CI with new tests.
2025-10-07 15:19:56 +02:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
embedding_mixin.py chore(api): remove deprecated embeddings impls (#3301) 2025-09-29 14:45:09 -04:00
inference_store.py chore: fix/add logging categories (#3658) 2025-10-02 13:10:13 -07:00
litellm_openai_mixin.py chore: remove deprecated inference.chat_completion implementations (#3654) 2025-10-03 07:55:34 -04:00
model_registry.py feat: add refresh_models support to inference adapters (default: false) (#3719) 2025-10-07 15:19:56 +02:00
openai_compat.py feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
openai_mixin.py feat: add refresh_models support to inference adapters (default: false) (#3719) 2025-10-07 15:19:56 +02:00
prompt_adapter.py chore: remove /v1/inference/completion and implementations (#3622) 2025-10-01 11:36:53 -04:00