llama-stack-mirror/llama_stack/templates
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| bedrock | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| databricks | Split safety into (llama-guard, prompt-guard, code-scanner) (#400) | 2024-11-11 09:29:18 -08:00 |
| fireworks | Use HF names for registering fireworks and together models | 2024-11-18 22:34:47 -08:00 |
| hf-endpoint | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| hf-serverless | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| inline-vllm | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| meta-reference-gpu | Add conda_env | 2024-11-18 16:08:14 -08:00 |
| meta-reference-quantized-gpu | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| ollama | More documentation fixes | 2024-11-18 17:06:13 -08:00 |
| remote-vllm | Update to docs | 2024-11-18 16:52:48 -08:00 |
| tgi | Add conda_env | 2024-11-18 16:08:14 -08:00 |
| together | together default | 2024-11-18 22:39:45 -08:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | Add conda_env | 2024-11-18 16:08:14 -08:00 |