llama-stack-mirror/llama_stack/templates/meta-reference-gpu
Derek Higgins ec4c04fa2d docs: Fix missing --gpu all flag in Docker run commands
Adding the --gpu all flag to Docker run commands
for meta-reference-gpu distributions ensures models are
loaded onto the GPU instead of the CPU.

Fixes: #1798

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-25 12:38:37 +01:00
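For context, a minimal sketch of the corrected invocation this commit documents, assuming the `llamastack/distribution-meta-reference-gpu` image, the default port 8321 (see meta_reference.py below), and an illustrative `INFERENCE_MODEL` value; none of these are taken verbatim from the commit. Note that the current Docker CLI spells the flag `--gpus` (plural), which is what the sketch uses, while the commit title writes `--gpu all`.

```sh
# Run the meta-reference-gpu distribution with host GPUs exposed to the
# container. Image name, volume mount, and model are assumptions for
# illustration; requires the NVIDIA container toolkit on the host.
docker run \
  -it \
  --gpus all \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port 8321 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
```

Without the GPU flag the container has no access to the host's GPUs, so weights load into system RAM and inference runs on CPU, which is the symptom the commit message describes.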
__init__.py Auto-generate distro yamls + docs (#468) 2024-11-18 14:57:06 -08:00
build.yaml Rename builtin::memory -> builtin::rag 2025-01-22 20:22:51 -08:00
doc_template.md docs: Fix missing --gpu all flag in Docker run commands 2025-04-25 12:38:37 +01:00
meta_reference.py fix: Default to port 8321 everywhere (#1734) 2025-03-20 15:50:41 -07:00
run-with-safety.yaml feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00
run.yaml feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00