llama-stack-mirror/llama_stack/templates
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | all distros | 2024-12-03 20:49:30 -08:00 |
| cerebras | Cerebras Inference Integration (#265) | 2024-12-03 21:15:32 -08:00 |
| fireworks | all distros | 2024-12-03 20:49:30 -08:00 |
| hf-endpoint | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| hf-serverless | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| meta-reference-gpu | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| meta-reference-quantized-gpu | all distros | 2024-12-03 20:49:30 -08:00 |
| ollama | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| remote-vllm | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| tgi | run-with-safety memory | 2024-12-03 20:54:59 -08:00 |
| together | override faiss memory provider only in run.yaml | 2024-12-03 20:41:44 -08:00 |
| vllm-gpu | all distros | 2024-12-03 20:51:21 -08:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | add all providers | 2024-12-03 20:31:34 -08:00 |