| Name | Last commit | Date |
|------|-------------|------|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| cerebras | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| cerebras_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| databricks | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| fireworks | fix: OpenAI Completions API and Fireworks (#1997) | 2025-04-21 11:49:12 -07:00 |
| fireworks_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| groq_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| nvidia | feat: NVIDIA allow non-llama model registration (#1859) | 2025-04-24 17:13:33 -07:00 |
| ollama | feat: allow ollama to use 'latest' if available but not specified (#1903) | 2025-04-14 09:03:54 -07:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| runpod | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| sambanova | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| sambanova_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| together | fix: Together provider shutdown and default to non-streaming (#2001) | 2025-04-22 17:47:53 +02:00 |
| together_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| vllm | fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) | 2025-04-23 15:33:19 +02:00 |
| watsonx | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |