llama-stack-mirror/llama_stack/templates
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| bedrock | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| databricks | Split safety into (llama-guard, prompt-guard, code-scanner) (#400) | 2024-11-11 09:29:18 -08:00 |
| fireworks | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| hf-endpoint | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| hf-serverless | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| inline-vllm | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| meta-reference-gpu | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| meta-reference-quantized-gpu | Update provider types and prefix with inline:: | 2024-11-12 12:54:44 -08:00 |
| ollama | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| remote-vllm | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| tgi | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| together | Move run-*.yaml to templates/ so they can be packaged | 2024-11-18 14:54:20 -08:00 |
| __init__.py | Start auto-generating { build, run, doc.md } for distributions | 2024-11-15 14:17:16 -08:00 |
| template.py | Add ollama/pull-models.sh | 2024-11-18 11:44:03 -08:00 |