llama-stack-mirror/llama_stack/providers/remote/inference
Last updated: 2025-05-14 12:46:24 +02:00
Name | Last commit | Date
anthropic | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
bedrock | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
cerebras | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
cerebras_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
databricks | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
fireworks | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
fireworks_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
gemini | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
groq | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
groq_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
llama_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
nvidia | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
ollama | fix: ollama openai completion and chat completion params (#2125) | 2025-05-12 10:57:53 -07:00
openai | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
passthrough | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
runpod | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
sambanova | feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) | 2025-05-06 16:50:22 -07:00
sambanova_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
tgi | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
together | fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) | 2025-05-08 15:18:16 -07:00
together_openai_compat | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
vllm | Resolving merge conflicts. | 2025-05-14 12:39:32 +02:00
watsonx | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
__init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00