llama-stack-mirror/llama_stack/templates/dev
File          Last commit                                                                Date
__init__.py   feat: add (openai, anthropic, gemini) providers via litellm (#1267)       2025-02-25 22:07:33 -08:00
build.yaml    feat(providers): Groq now uses LiteLLM openai-compat (#1303)              2025-02-27 13:16:50 -08:00
dev.py        fix: Default to port 8321 everywhere (#1734)                              2025-03-20 15:50:41 -07:00
run.yaml      chore: Revert "chore(telemetry): remove service_name entirely" (#1785)    2025-03-25 14:42:05 -07:00
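
For orientation, a minimal sketch of talking to a `dev` distribution built from this template, assuming it is already running locally on the default port 8321 noted in the `dev.py` change above, and that the separately installed `llama-stack-client` Python package is available. The startup command in the comment and the printed attribute are assumptions based on typical llama-stack usage, not taken from the files listed here.

```python
# Sketch (assumptions): a dev-template server is already running locally,
# e.g. started with `llama stack run`, and listens on the default port 8321.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the models exposed by the template's configured providers
# (openai, anthropic, gemini, and Groq via LiteLLM, per the commits above).
# Attribute names may differ slightly between client versions.
for model in client.models.list():
    print(model.identifier)
```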