llama-stack-mirror/llama_stack/providers/utils
Matthew Farrellee bf63470c22 feat: implement dynamic model detection support for inference providers using litellm
This enhancement allows inference providers built on LiteLLMOpenAIMixin to validate
model availability against LiteLLM's per-provider model catalog, improving
reliability and the user experience when working with different AI service providers.

- Add litellm_provider_name parameter to LiteLLMOpenAIMixin constructor
- Add check_model_availability method to LiteLLMOpenAIMixin using litellm.models_by_provider
- Update Gemini, Groq, and SambaNova inference adapters to pass litellm_provider_name
2025-07-24 09:49:32 -04:00
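A minimal sketch of how the check described in this commit might work, assuming litellm.models_by_provider is LiteLLM's dict mapping a provider name to its known model identifiers; the class and method shapes below are illustrative assumptions, not the repository's actual implementation:

```python
# Minimal sketch, assuming a constructor that stores the new
# litellm_provider_name and an async check_model_availability helper.
import litellm


class LiteLLMOpenAIMixinSketch:
    """Hypothetical stand-in for LiteLLMOpenAIMixin, for illustration only."""

    def __init__(self, litellm_provider_name: str):
        # Provider key as litellm knows it, e.g. "gemini", "groq", "sambanova".
        self.litellm_provider_name = litellm_provider_name

    async def check_model_availability(self, model: str) -> bool:
        # litellm.models_by_provider maps a provider name to the list of
        # model identifiers litellm ships metadata for; an unknown provider
        # yields an empty list, so the check fails closed.
        known_models = litellm.models_by_provider.get(self.litellm_provider_name, [])
        return model in known_models
```

Per the commit, adapters such as Gemini, Groq, and SambaNova would pass their litellm provider key to the mixin's constructor so the same availability check applies across providers.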
bedrock feat: drop python 3.10 support (#2469) 2025-06-19 12:07:14 +05:30
common chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
datasetio chore(refact): move paginate_records fn outside of datasetio (#2137) 2025-05-12 10:56:14 -07:00
inference feat: implement dynamic model detection support for inference providers using litellm 2025-07-24 09:49:32 -04:00
kvstore fix: store configs (#2593) 2025-07-03 10:07:23 -07:00
memory chore: Moving vector store and vector store files helper methods to openai_vector_store_mixin (#2863) 2025-07-23 13:35:48 -04:00
responses fix: add missing argument and methods (#2550) 2025-06-30 14:55:37 +02:00
scoring chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
sqlstore fix: auth sql store: user is owner policy (#2674) 2025-07-10 14:40:32 -07:00
telemetry chore(test): fix flaky telemetry tests (#2815) 2025-07-22 12:30:14 -07:00
tools chore: bump python supported version to 3.12 (#2475) 2025-06-24 09:22:04 +05:30
vector_io chore: Updating chunk id generation to ensure uniqueness (#2618) 2025-07-04 10:26:35 +05:30
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
pagination.py chore(refact): move paginate_records fn outside of datasetio (#2137) 2025-05-12 10:56:14 -07:00
scheduler.py chore: bump python supported version to 3.12 (#2475) 2025-06-24 09:22:04 +05:30