| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `anthropic` | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| `bedrock` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `cerebras` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `cerebras_openai_compat` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `databricks` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `fireworks` | fix: OpenAI Completions API and Fireworks (#1997) | 2025-04-21 11:49:12 -07:00 |
| `fireworks_openai_compat` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `gemini` | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| `groq` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `groq_openai_compat` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `llama_openai_compat` | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| `nvidia` | feat: NVIDIA allow non-llama model registration (#1859) | 2025-04-24 17:13:33 -07:00 |
| `ollama` | fix: ollama still using tools with tool_choice="none" (#2047) | 2025-04-29 10:45:28 +02:00 |
| `openai` | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| `passthrough` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `runpod` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `sambanova` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `sambanova_openai_compat` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `tgi` | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| `together` | fix: Together provider shutdown and default to non-streaming (#2001) | 2025-04-22 17:47:53 +02:00 |
| `together_openai_compat` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| `vllm` | fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) | 2025-04-23 15:33:19 +02:00 |
| `watsonx` | fix: updated watsonx inference chat apis with new repo changes (#2033) | 2025-04-26 10:17:52 -07:00 |
| `__init__.py` | impls -> inline , adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |