| .. |
|
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| anthropic | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| bedrock | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| cerebras | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| cerebras_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| databricks | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| fireworks | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| fireworks_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| gemini | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| groq | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| groq_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| llama_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| nvidia | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| ollama | apis, alt | 2025-05-18 21:21:49 -07:00 |
| openai | feat: use openai-python for openai inference provider (#2193) | 2025-05-16 12:57:56 -07:00 |
| passthrough | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| runpod | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| sambanova | feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) | 2025-05-06 16:50:22 -07:00 |
| sambanova_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| tgi | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| together | fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) | 2025-05-08 15:18:16 -07:00 |
| together_openai_compat | apis, alt | 2025-05-18 21:21:49 -07:00 |
| vllm | fix: multiple tool calls in remote-vllm chat_completion (#2161) | 2025-05-15 11:23:29 -07:00 |
| watsonx | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |