llama-stack/llama_stack/templates/meta-reference-gpu
Derek Higgins 0e4307de0f
docs: Fix missing --gpu all flag in Docker run commands (#2026)
Adding the --gpu all flag to the Docker run commands for the
meta-reference-gpu distributions ensures models are loaded onto the
GPU instead of the CPU.
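
For reference, a sketch of the corrected invocation. Note that Docker's documented device flag is spelled `--gpus all` (plural); the image name, port, and `INFERENCE_MODEL` value below follow the distribution docs' conventions but are illustrative here, not verbatim from this PR:

```bash
# Run the meta-reference-gpu distribution with GPU access enabled.
# Without the GPU flag, the container silently falls back to CPU inference.
docker run \
  -it \
  --gpus all \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port 8321 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
```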

Also remove the docs for meta-reference-quantized-gpu; the
distribution was removed in #1887 but these files were left behind.


Fixes: #1798

# What does this PR do?
Fixes the docs to add the --gpu all flag to the docker run commands.

Closes #1798

## Test Plan

Verified against the Docker documentation, but not tested end to end.
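
A minimal smoke test one could run, assuming the NVIDIA Container Toolkit is installed on the host (the CUDA image tag is illustrative): if GPU passthrough is wired up correctly, `nvidia-smi` inside the container should list the host GPUs.

```bash
# Confirm the container can see the GPU before pulling the full distribution image.
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```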

---------

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-25 12:17:31 -07:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| build.yaml | Rename builtin::memory -> builtin::rag | 2025-01-22 20:22:51 -08:00 |
| doc_template.md | docs: Fix missing --gpu all flag in Docker run commands (#2026) | 2025-04-25 12:17:31 -07:00 |
| meta_reference.py | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| run-with-safety.yaml | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |
| run.yaml | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |