llama-stack-mirror/llama_stack/providers/remote/inference
| Name | Latest commit | Date |
|---|---|---|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| cerebras | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| databricks | fix: resolve type hint issues and import dependencies (#1176) | 2025-02-25 11:06:47 -08:00 |
| fireworks | feat: add (openai, anthropic, gemini) providers via litellm (#1267) | 2025-02-25 22:07:33 -08:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| ollama | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| runpod | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| sambanova | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| sample | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| tgi | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| together | feat(providers): support non-llama models for inference providers (#1200) | 2025-02-21 13:21:28 -08:00 |
| vllm | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |