llama-stack-mirror/llama_stack/templates
Derek Higgins 0e4307de0f
docs: Fix missing --gpu all flag in Docker run commands (#2026)
Adding the `--gpu all` flag to Docker run commands
for the meta-reference-gpu distribution ensures models are loaded onto the
GPU instead of the CPU.
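
For context, a minimal sketch of the kind of command the docs change targets. The image name, port, and model below are illustrative, and the sketch uses the Docker CLI's `--gpus all` form of the flag:

```bash
# Illustrative sketch: run the meta-reference-gpu distribution with GPU access.
# --gpus all exposes the host GPUs to the container so models load onto the
# GPU rather than the CPU; port and model values are examples only.
docker run -it \
  --gpus all \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-meta-reference-gpu \
  --port 8321 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
```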

Also remove the docs for meta-reference-quantized-gpu: the distribution was
removed in #1887, but these files were left behind.

Fixes: #1798

# What does this PR do?
Fixes the docs to add the `--gpu all` flag to `docker run` commands.

Closes #1798

## Test Plan

Verified against the Docker documentation, but not tested on GPU hardware.
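
To re-execute the verification on an actual GPU host, a standard smoke test (a sketch, assuming the NVIDIA Container Toolkit is installed) is to confirm the GPUs are visible from inside a container started with the flag:

```bash
# If GPU passthrough works, nvidia-smi lists the host GPUs from inside the container.
docker run --rm --gpus all ubuntu nvidia-smi
```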

---------

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-25 12:17:31 -07:00
| Name | Last commit | Date |
|---|---|---|
| bedrock | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| cerebras | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| ci-tests | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| dell | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| dev | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| hf-endpoint | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| hf-serverless | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| meta-reference-gpu | docs: Fix missing --gpu all flag in Docker run commands (#2026) | 2025-04-25 12:17:31 -07:00 |
| nvidia | feat: NVIDIA allow non-llama model registration (#1859) | 2025-04-24 17:13:33 -07:00 |
| ollama | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| open-benchmark | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| passthrough | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| remote-vllm | docs: Add tips for debugging remote vLLM provider (#1992) | 2025-04-18 14:47:47 +02:00 |
| sambanova | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| together | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| verification | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| vllm-gpu | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| watsonx | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| dependencies.json | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |