llama-stack-mirror/llama_stack/providers/remote
Matthew Farrellee bf63470c22 feat: implement dynamic model detection support for inference providers using litellm
This enhancement allows inference providers using LiteLLMOpenAIMixin to validate
model availability against LiteLLM's official provider model listings, improving
reliability and user experience when working with different AI service providers.

- Add litellm_provider_name parameter to LiteLLMOpenAIMixin constructor
- Add check_model_availability method to LiteLLMOpenAIMixin using litellm.models_by_provider
- Update Gemini, Groq, and SambaNova inference adapters to pass litellm_provider_name
2025-07-24 09:49:32 -04:00
agents test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
datasetio fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
eval refactor(env)!: enhanced environment variable substitution (#2490) 2025-06-26 08:20:08 +05:30
inference feat: implement dynamic model detection support for inference providers using litellm 2025-07-24 09:49:32 -04:00
post_training fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
safety fix: sambanova shields and model validation (#2693) 2025-07-11 16:29:15 -04:00
tool_runtime fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
vector_io chore: Added openai compatible vector io endpoints for chromadb (#2489) 2025-07-23 13:51:58 -07:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00