llama-stack-mirror/llama_stack/providers/remote/inference
Name         Last commit                                                                   Date
anthropic    feat(providers): Groq now uses LiteLLM openai-compat (#1303)                  2025-02-27 13:16:50 -08:00
bedrock      several fixes                                                                 2025-04-07 10:31:20 -07:00
cerebras     several fixes                                                                 2025-04-07 10:31:20 -07:00
databricks   several fixes                                                                 2025-04-07 10:31:20 -07:00
fireworks    several fixes                                                                 2025-04-07 10:31:20 -07:00
gemini       feat(providers): Groq now uses LiteLLM openai-compat (#1303)                  2025-02-27 13:16:50 -08:00
groq         fix: register provider model name and HF alias in run.yaml (#1304)            2025-02-27 16:39:23 -08:00
nvidia       several fixes                                                                 2025-04-07 10:31:20 -07:00
ollama       several fixes                                                                 2025-04-07 10:31:20 -07:00
openai       feat(providers): Groq now uses LiteLLM openai-compat (#1303)                  2025-02-27 13:16:50 -08:00
passthrough  fix: passthrough impl response.content.text (#1665)                           2025-03-17 13:42:08 -07:00
runpod       test: add unit test to ensure all config types are instantiable (#1601)       2025-03-12 22:29:58 -07:00
sambanova    several fixes                                                                 2025-04-07 10:31:20 -07:00
tgi          chore: more mypy checks (ollama, vllm, ...) (#1777)                           2025-04-01 17:12:39 +02:00
together     several fixes                                                                 2025-04-07 10:31:20 -07:00
vllm         revert some unintentional changes by copying source of truth to llama-models  2025-04-07 11:01:24 -07:00
__init__.py  impls -> inline, adapters -> remote (#381)                                    2024-11-06 14:54:05 -08:00
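Each directory above holds one remote inference adapter, and the groq entry's commit message points at registering a provider model name and its HF alias in run.yaml (#1304). As a rough illustration of how such an adapter gets wired up, here is a minimal run.yaml sketch, assuming the remote::groq provider type; the model names, env var, and config fields are illustrative assumptions, not taken from this repository:

```yaml
# Sketch of a run.yaml fragment wiring one remote inference provider.
# remote::groq and all model names / env vars below are illustrative
# assumptions, not copied from the repo.
providers:
  inference:
  - provider_id: groq
    provider_type: remote::groq
    config:
      api_key: ${env.GROQ_API_KEY}
models:
- metadata: {}
  model_id: meta-llama/Llama-3.3-70B-Instruct   # HF-style alias clients request
  provider_id: groq
  provider_model_id: llama-3.3-70b-versatile    # name the provider knows it by
```

The same shape would apply to the other adapters listed above; only provider_type and the provider-specific config block differ.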