llama-stack-mirror/llama_stack/providers/remote/inference
Latest commit: 7e211f8553 "pre-commit fixes" by Chantal D Gama Rose, 2025-03-14 13:56:05 -07:00
Name          Last updated                 Last commit
anthropic     2025-02-27 13:16:50 -08:00   feat(providers): Groq now uses LiteLLM openai-compat (#1303)
bedrock       2025-03-14 13:56:05 -07:00   pre-commit fixes
cerebras      2025-03-14 13:56:05 -07:00   pre-commit fixes
databricks    2025-03-14 13:56:05 -07:00   pre-commit fixes
fireworks     2025-03-14 13:56:05 -07:00   pre-commit fixes
gemini        2025-02-27 13:16:50 -08:00   feat(providers): Groq now uses LiteLLM openai-compat (#1303)
groq          2025-02-27 16:39:23 -08:00   fix: register provider model name and HF alias in run.yaml (#1304)
nvidia        2025-03-14 13:56:05 -07:00   pre-commit fixes
ollama        2025-03-14 13:56:05 -07:00   pre-commit fixes
openai        2025-02-27 13:16:50 -08:00   feat(providers): Groq now uses LiteLLM openai-compat (#1303)
passthrough   2025-03-14 13:56:05 -07:00   pre-commit fixes
runpod        2025-03-14 13:56:05 -07:00   pre-commit fixes
sambanova     2025-03-14 13:56:05 -07:00   pre-commit fixes
tgi           2025-03-14 13:56:05 -07:00   pre-commit fixes
together      2025-03-14 13:56:05 -07:00   pre-commit fixes
vllm          2025-03-14 13:56:05 -07:00   pre-commit fixes
__init__.py   2024-11-06 14:54:05 -08:00   impls -> inline, adapters -> remote (#381)