mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-06-28 02:53:30 +00:00
Add the `--gpus all` flag to the `docker run` commands for the meta-reference-gpu distributions, so that models are loaded onto the GPU instead of the CPU. Also remove the docs for meta-reference-quantized-gpu: that distribution was removed in #1887, but these files were left behind.

Fixes: #1798

# What does this PR do?

Updates the docs to add the `--gpus all` flag to the `docker run` commands.

Closes #1798

## Test Plan

Verified against the Docker documentation, but untested.

---------

Signed-off-by: Derek Higgins <derekh@redhat.com>
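As a rough sketch of the change the PR describes, a `docker run` invocation for a GPU distribution would gain the `--gpus all` flag (the image name, port, and volume mount below are illustrative placeholders, not taken from this PR; the flag also requires the NVIDIA Container Toolkit on the host):

```shell
# Illustrative example only: image name, port, and mount are assumptions.
# --gpus all exposes every host GPU to the container; without it the
# server falls back to CPU inference.
docker run -it \
  --gpus all \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port 8321
```

Note that the Docker CLI flag is `--gpus` (plural); `--gpu` is not a valid `docker run` option.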
Distribution docs in this directory:

- bedrock.md
- cerebras.md
- dell-tgi.md
- dell.md
- fireworks.md
- groq.md
- meta-reference-gpu.md
- nvidia.md
- ollama.md
- passthrough.md
- remote-vllm.md
- sambanova.md
- tgi.md
- together.md