llama-stack-mirror/llama_stack/templates/dev (last commit: 2025-03-04 20:35:46 +01:00)
__init__.py   feat: add (openai, anthropic, gemini) providers via litellm (#1267)     2025-02-25 22:07:33 -08:00
build.yaml    feat(providers): Groq now uses LiteLLM openai-compat (#1303)            2025-02-27 13:16:50 -08:00
dev.py        fix: register provider model name and HF alias in run.yaml (#1304)      2025-02-27 16:39:23 -08:00
run.yaml      Updated the configuration files to include the preprocessor resource.   2025-03-04 20:35:46 +01:00